A Computational Status Update for Exact Rational Mixed Integer Programming

Leon Eifler and Ambros Gleixner

Zuse Institute Berlin, Takustr. 7, 14195 Berlin, Germany, {eifler,gleixner}@zib.de
HTW Berlin, Ostendstraße 1, 12459 Berlin, Germany

This paper has been accepted to IPCO 2021. Please cite as:
Leon Eifler, Ambros Gleixner: A Computational Status Update for Exact Rational Mixed Integer Programming. Accepted for publication in Integer Programming and Combinatorial Optimization: 22nd International Conference, IPCO 2021.

ZIB Report 21-04 (January 2021)
Abstract.
The last milestone achievement for the roundoff-error-free solution of general mixed integer programs over the rational numbers was a hybrid-precision branch-and-bound algorithm published by Cook, Koch, Steffy, and Wolter in 2013. We describe a substantial revision and extension of this framework that integrates symbolic presolving, features an exact repair step for solutions from primal heuristics, employs a faster rational LP solver based on LP iterative refinement, and is able to produce independently verifiable certificates of optimality. We study the significantly improved performance and give insights into the computational behavior of the new algorithmic components. On the MIPLIB 2017 benchmark set, we observe an average speedup of 6.6x over the original framework and 2.8 times as many instances solved.

* The work for this article has been conducted within the Research Campus Modal funded by the German Federal Ministry of Education and Research (BMBF grant numbers 05M14ZAM, 05M20ZBM).

1 Introduction

It is widely accepted that mixed integer programming (MIP) is a powerful tool for solving a broad variety of challenging optimization problems and that state-of-the-art MIP solvers are sophisticated and complex computer programs. However, virtually all established solvers today rely on fast floating-point arithmetic. Hence, their theoretical promise of global optimality is compromised by roundoff errors inherent in this incomplete number system. Though tiny for each single arithmetic operation, these errors can accumulate and result in incorrect claims of optimality for suboptimal integer assignments, or even incorrect claims of infeasibility. Due to the nonconvexity of MIP, even performing an a posteriori analysis of such errors or postprocessing them becomes difficult.

In several applications, these numerical caveats can become actual limitations. This holds in particular when the solution of mixed integer programs is used as a tool in mathematics itself. Examples of recent work that employs MIP to investigate open mathematical questions include [11,12,18,28,29,32]. Some of these approaches are forced to rely on floating-point solvers because the availability, the flexibility, and most importantly the computational performance of MIP solvers with numerically rigorous guarantees is currently limited. This makes the results of these research efforts not as strong as they could be. Examples for industrial applications where the correctness of results is paramount include hardware verification [1] or compiler optimization [35].

The milestone paper by Cook, Koch, Steffy, and Wolter [16] presents a hybrid-precision branch-and-bound implementation that can still be considered the state of the art for solving general mixed integer programs exactly over the rational numbers. It combines symbolic and numeric computation and applies different dual bounding methods [19,31,33] based on linear programming (LP) in order to dynamically trade off their speed against robustness and quality. However, beyond advanced strategies for branching and bounding, [16] does not include any of the supplementary techniques that are responsible for the strong performance of floating-point MIP solvers today. In this paper, we make a first step to address this research gap in two main directions.

First, we incorporate a symbolic presolving phase, which safely reduces the size and tightens the formulation of the instance to be passed to the branch-and-bound process. This is motivated by the fact that presolving has been identified by several authors as one of the components—if not the component—with the largest impact on the performance of floating-point MIP solvers [2,4]. To the best of our knowledge, this is the first time that the impact of symbolic preprocessing routines for general MIP is analyzed in the literature.

Second, we complement the existing dual bounding methods by enabling the use of primal heuristics.
The motivation for this choice is less to reduce the total solving time, but rather to improve the usability of the exact MIP code in practical settings where finding good solutions earlier may be more relevant than proving optimality eventually. Similar to the dual bounding methods, we follow a hybrid-precision scheme. Primal heuristics are exclusively executed on the floating-point approximation of the rational input data. Whenever they produce a potentially improving solution, this solution is checked for approximate feasibility in floating-point arithmetic. If successful, the solution is postprocessed with an exact repair step that involves an exact LP solve.

Moreover, we integrate the exact LP solver SoPlex, which follows the recently developed scheme of LP iterative refinement [23], we extend the logging of certificates in the recently developed VIPR format to all available dual bounding methods [13], and we produce a thoroughly revised implementation of the original framework [16], which improves multiple technical details. Our computational study evaluates the performance of the new algorithmic aspects in detail and indicates a significant overall speedup compared to the original framework.

The overarching goal and contribution of this research is to extend the computational practice of MIP to the level of rigor that has been achieved in recent years, for example, by the field of satisfiability solving [34], while at the same time retaining most of the computational power embedded in floating-point solvers. In MIP, a similar level of performance and rigor is certainly much more difficult to reach in practice, due to the numerical operations that are inherently involved in solving general mixed integer programs. However, we believe that there is no reason why this vision should be fundamentally out of reach for the rich machinery of MIP techniques developed over the last decades. The goal of this paper is to demonstrate the viability of this agenda within a first, small selection of methods. The resulting code is freely available for research purposes as an extension of SCIP.
2 Related Work

In the following, we describe related work in numerically exact optimization, including the main ideas and features of the framework that we build upon. Before turning to the most general case, we would like to mention that roundoff-error-free methods are available for several specific classes of pure integer problems. One example for such a combinatorial optimization problem is the traveling salesman problem, for which the branch-and-cut solver Concorde applies safe interval arithmetic to postprocess LP relaxation solutions and ensures the validity of domain-specific cutting planes by their combinatorial structure [5].

A broader class of such problems, on binary decision variables, is addressed in satisfiability solving (SAT) and pseudo-Boolean optimization (PBO) [10]. Solvers for these problem classes usually do not suffer from numerical errors and often support solver-independent verification of results [34]. While optimization variants exist, the development of these methods is to a large extent driven by feasibility problems. The broader class of solvers for satisfiability modulo theories (SMT), e.g., [30], may also include real-valued variables, in particular for satisfiability modulo the theory of linear arithmetic. However, as pointed out also in [20], the target applications of SMT solvers differ significantly from the motivating use cases in LP and MIP.

Exact optimization over convex polytopes intersected with lattices is also supported by some software libraries for polyhedral analysis [7,8]. These tools are not particularly targeted towards solving LPs or MIPs of larger scale and usually follow the naive approach of simply executing all operations symbolically, in exact rational arithmetic. This yields numerically exact results and can even be highly efficient as long as the size of problems or the encoding length of intermediate numbers is limited.
However, as pointed out by [19] and [16], this purely symbolic approach quickly becomes prohibitively slow in general. By contrast, the most effective methods in the literature rely on a hybrid approach and combine exact and numeric computation. For solving pure LPs exactly, the most recent methods that follow this paradigm are incremental precision boosting [6] and LP iterative refinement [23]. In an exact MIP solver, however, it is not always necessary to solve LP relaxations completely, but it often suffices to provide dual bounds that underestimate the optimal relaxation value safely. This can be achieved by postprocessing approximate LP solutions. Bound-shift [31] is such a method that only relies on directed rounding and interval arithmetic and is therefore very fast. However, as the name suggests, it requires upper and lower bounds on all variables in order to be applicable. A more widely applicable bounding method is project-and-shift [33], which uses an interior point or ray of the dual LP. These need to be computed by solving an auxiliary LP exactly in advance, though only once per MIP solve. Subsequently, approximate dual LP solutions can be corrected by projecting them to the feasible region defined by the dual constraints and shifting the result to satisfy sign constraints on the dual multipliers.

The hybrid branch-and-bound method of [16] combines such safe dual bounding methods with a state-of-the-art branching heuristic, reliability branching [3]. It maintains both the exact problem formulation

    min { c^T x | Ax ≥ b, x ∈ Q^n, x_i ∈ Z ∀ i ∈ I }

with rational input data A ∈ Q^{m×n}, c ∈ Q^n, b ∈ Q^m, as well as a floating-point approximation with data Ā, b̄, c̄, which are defined as the componentwise closest numbers representable in floating-point arithmetic. The set I ⊆ {1, . . . , n} contains the indices of integer variables.

During the solve, for all LP relaxations, the floating-point approximation is first solved in floating-point arithmetic as an approximation and then postprocessed to generate a valid dual bound. The methods available for this safe bounding step are the previously described bound-shift [31], project-and-shift [33], and an exact LP solve with the exact LP solver QSopt_ex based on incremental precision boosting [6]. (Further dual bounding methods were tested, but reported as less important in [16].) On the primal side, all solutions are checked for feasibility in exact arithmetic before being accepted.

Finally, this exact MIP framework was recently extended by the possibility to generate a certificate of correctness [13]. This certificate is a tree-less encoding of the branch-and-bound search, with a set of dual multipliers to prove the dual bound at each node or its infeasibility. Its correctness can be verified independently of the solving process using the checker software VIPR [14].
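The principle underlying all of these safe bounding methods is weak duality: once candidate dual multipliers are verified, or corrected, to be exactly dual feasible, their objective value is a rigorous bound. The following sketch (our own illustration, not code from any of the cited solvers) verifies candidate multipliers for min { c^T x | Ax ≥ b, x ≥ 0 } in exact rational arithmetic:

```python
from fractions import Fraction as F

def safe_dual_bound(A, b, c, y):
    """Check y >= 0 and A^T y <= c exactly; if both hold, weak duality makes
    b^T y a valid lower bound on min{c^T x : Ax >= b, x >= 0}.
    Returns the bound as a Fraction, or None if y is not dual feasible."""
    A = [[F(v) for v in row] for row in A]
    b, c, y = [F(v) for v in b], [F(v) for v in c], [F(v) for v in y]
    if any(yi < 0 for yi in y):
        return None
    # dual constraints: (A^T y)_j <= c_j for every column j
    for j in range(len(c)):
        if sum(A[i][j] * y[i] for i in range(len(y))) > c[j]:
            return None
    return sum(bi * yi for bi, yi in zip(b, y))

# min x1 + x2  s.t.  x1 + x2 >= 1, x >= 0; the optimum is 1
bound = safe_dual_bound([[1, 1]], [1], [1, 1], [1])
```

Project-and-shift and bound-shift go further in that they actively restore dual feasibility of an approximate solution rather than merely rejecting it, but the exact verification step has this shape.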
3 Algorithmic Advances

The exact MIP solver presented here extends [16] in four ways: the addition of a symbolic presolving phase, the execution of primal floating-point heuristics coupled with an exact repair step, the use of a recently developed exact LP solver based on LP iterative refinement, and a generally improved integration of the exact solving routines into the core branch-and-bound algorithm.
Symbolic presolving. The first major extension is the addition of symbolic presolving. To this end, we integrate the newly available presolving library PaPILO [25] for integer and linear programming. PaPILO has several benefits for our purposes. First, its code base is by design fully templatized with respect to the arithmetic type. This enables us to integrate it with rational numbers as the data type for storing the MIP data and for all its computations. Second, it provides a large range of presolving techniques already implemented. The ones used in our exact framework are coefficient strengthening, constraint propagation, implicit integer detection, singleton column detection, substitution of variables, simplification of inequalities, parallel row detection, sparsification, probing, dual fixing, dual inference, singleton stuffing, and dominated column detection. For a detailed explanation of these methods, we refer to [2]. Third, PaPILO comes with a sophisticated parallelization scheme that helps to compensate for the increased overhead introduced by the use of rational arithmetic. For details see [21].

When SCIP enters the presolving stage, we pass a rational copy of the problem to PaPILO, which executes its presolving routines iteratively until no sufficiently large reductions are found. Subsequently, we extract the postsolving information provided by PaPILO to transfer the model reductions to SCIP. These include fixings, aggregations, and bound changes of variables and strengthening or deletion of constraints, all of which are performed in rational arithmetic.
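To illustrate what presolving over the rationals means in its simplest form, consider activity-based bound tightening (constraint propagation) on a single constraint a^T x ≥ b. The sketch below is our own generic illustration, not PaPILO's implementation; its key property is that no feasibility tolerance is needed, since all arithmetic is exact:

```python
from fractions import Fraction as F

def propagate_ge(a, b, lb, ub):
    """Tighten variable bounds using the constraint sum(a_i x_i) >= b.
    All data are converted to Fractions, so every derived bound is exact."""
    a, b = [F(v) for v in a], F(b)
    lb, ub = [F(v) for v in lb], [F(v) for v in ub]
    # maximal activity of the left-hand side over the current bounds
    maxact = sum(ai * (ub[i] if ai > 0 else lb[i]) for i, ai in enumerate(a))
    for j, aj in enumerate(a):
        if aj == 0:
            continue
        # maximal activity of all terms except the one for variable j
        rest = maxact - aj * (ub[j] if aj > 0 else lb[j])
        residual = (b - rest) / aj          # from a_j x_j >= b - rest
        if aj > 0:
            lb[j] = max(lb[j], residual)
        else:
            ub[j] = min(ub[j], residual)    # dividing by a_j < 0 flips the sense
    return lb, ub

# 2x + 3y >= 7 with x, y in [0, 2] forces x >= 1/2 and y >= 1
lb, ub = propagate_ge([2, 3], 7, [0, 0], [2, 2])
```

A real presolver would additionally detect infeasibility when a lower bound exceeds an upper bound, and would round bounds of integer variables; the point here is only that exact arithmetic removes the need for the tolerances that a floating-point presolver must apply.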
Primal heuristics. The second extension is the safe activation of SCIP's floating-point heuristics and the addition of an exact repair heuristic for their approximate solutions. Heuristics are not known to reduce the overall solving time drastically, but they can be particularly useful on hard instances that cannot be solved at all, and in order to avoid terminating without a feasible solution.

In general, activating SCIP's floating-point heuristics does not interfere with the exactness of the solving process, although care has to be taken that no changes to the model are performed, e.g., the creation of a no-good constraint. However, the chance that these heuristics find a solution that is feasible in the exact sense can be low, especially if equality constraints are present in the model. Thus, we postprocess solutions found by floating-point heuristics in the following way. First, we fix all integer variables to the values found by the floating-point heuristic, rounding slightly fractional values to their nearest integer. Then an exact LP is solved for the remaining continuous subproblem. If that LP is feasible, this produces an exactly feasible solution to the mixed integer program.

Certainly, frequently solving this subproblem exactly can create a significant overhead compared to executing a floating-point heuristic alone, especially when a large percentage of the variables is continuous and thus cannot be fixed. Therefore, we impose working limits on the frequency of running the exact repair heuristic, which are explained in more detail in Sec. 4.
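The fix-and-repair step can be sketched as follows. This is a toy illustration under a strong simplifying assumption of our own: the constraints are equalities and each row determines exactly one continuous variable, so the "exact LP solve" degenerates into exact back-substitution. The solver itself solves a general exact LP at this point:

```python
from fractions import Fraction as F

def repair(A, b, x_float, int_idx, lb, ub):
    """Round the integer variables of an approximate solution to A x = b and
    restore exact feasibility by recomputing the continuous variables in
    rational arithmetic. Assumes row k involves exactly one continuous
    variable (a stand-in for the exact LP solve); returns None on failure."""
    n = len(x_float)
    x = [F(round(v)) if i in int_idx else None for i, v in enumerate(x_float)]
    cont = [i for i in range(n) if i not in int_idx]
    for k, j in enumerate(cont):
        row = [F(v) for v in A[k]]
        # move all already-fixed terms to the right-hand side, exactly
        rhs = F(b[k]) - sum(row[i] * x[i] for i in range(n)
                            if i != j and x[i] is not None)
        x[j] = rhs / row[j]
    # the repaired point must still respect the variable bounds
    if all(F(lb[i]) <= x[i] <= F(ub[i]) for i in range(n)):
        return x
    return None

# x + y + z = 10, x and y integer: (2.0000001, 3.0, 4.9) repairs to (2, 3, 5)
sol = repair([[1, 1, 1]], [10], [2.0000001, 3.0, 4.9], {0, 1},
             [0, 0, 0], [10, 10, 10])
```

The failure branch mirrors the behavior described above: when the continuous completion violates a bound, the candidate is discarded rather than accepted with a tolerance.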
LP iterative refinement. Exact linear programming is a crucial part of the exact MIP solving process. Instead of QSopt_ex, we use SoPlex as the exact linear programming solver. The reason for this change is that SoPlex uses LP iterative refinement [24] as the strategy to solve LPs exactly, which compares favorably against incremental precision boosting [23].
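The core idea of iterative refinement can be illustrated on the simpler case of a linear system (this is our own simplified sketch, not SoPlex's implementation, which refines primal and dual LP solutions): solve approximately with a fast floating-point method, compute the residual in exact rational arithmetic, scale it up, solve the residual system approximately again, and accumulate the exact correction.

```python
from fractions import Fraction as F

def solve_float(A, b):
    """Naive Gaussian elimination in double precision: the fast but
    approximate oracle, standing in for the floating-point LP solver."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for k in range(n):
        p = max(range(k, n), key=lambda r: abs(M[r][k]))  # partial pivoting
        M[k], M[p] = M[p], M[k]
        for r in range(k + 1, n):
            f = M[r][k] / M[k][k]
            for c in range(k, n + 1):
                M[r][c] -= f * M[k][c]
    x = [0.0] * n
    for k in reversed(range(n)):
        x[k] = (M[k][n] - sum(M[k][c] * x[c] for c in range(k + 1, n))) / M[k][k]
    return x

def refine(A, b, rounds=3):
    """Iterative refinement: float solve, exact rational residual, approximate
    solve of the scaled residual system, exact accumulation of the correction."""
    Aq = [[F(v) for v in row] for row in A]
    x = [F(0)] * len(b)
    r = [F(v) for v in b]
    for _ in range(rounds):
        scale = max(abs(ri) for ri in r)
        if scale == 0:
            break                       # residual is exactly zero: done
        d = solve_float(A, [float(ri / scale) for ri in r])
        x = [xi + scale * F(di) for xi, di in zip(x, d)]
        # recompute the residual exactly so errors cannot accumulate
        r = [F(bi) - sum(Aq[i][j] * x[j] for j in range(len(x)))
             for i, bi in enumerate(b)]
    return x, r
```

Each round shrinks the residual by roughly the accuracy of the floating-point oracle, so a handful of cheap approximate solves replaces one expensive fully symbolic solve; the LP variant of [23] applies the same scheme to primal and dual violations simultaneously.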
Further enhancements. We improved several algorithmic details in the implementation of the hybrid branch-and-bound method. We would like to highlight two examples for these changes. First, we enable the use of an objective limit in the floating-point LP solver, which was not possible in the original framework. Passing the primal bound as an objective limit to the floating-point LP solver allows it to stop early, just after its dual bound exceeds the global primal bound. However, if the overlap is too small, postprocessing this LP solution with safe bounding methods can easily lead to a dual bound that no longer exceeds the objective limit. For this reason, before installing the primal bound as an objective limit in the LP solver, we increase it by a small amount computed from the statistically observed bounding error so far. Only when safe dual bounding fails is the LP solved again without an objective limit.

Second, we reduce the time needed for checking exact feasibility of primal solutions by prepending a safe floating-point check. Although checking a single solution for feasibility is fast, this happens often throughout the solve, and doing so repeatedly in exact arithmetic can become computationally expensive. To implement such a safe floating-point check, we employ running error analysis [27]. Let x* ∈ Q^n be a potential solution and let x̄* be the floating-point approximation of x*. Let a ∈ Q^n be a row of A with floating-point approximation ā, and right-hand side b_j ∈ Q. Instead of computing Σ_{i=1}^n a_i x*_i symbolically, we compute s = Σ_{i=1}^n ā_i x̄*_i in floating-point arithmetic, and alongside compute a bound µ on the maximal rounding error that may occur. We adjust the running error analysis described in [27, Alg. 3.2] to also account for the roundoff errors |x̄* − x*| and |ā − a|. After this computation, we can check whether s − µ ≥ b_j or s + µ ≤ b_j.
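A minimal version of such a check might look as follows. This is our sketch of a running error bound in the spirit of [27, Alg. 3.2]; it uses a first-order bound and ignores the additional terms for |x̄* − x*| and |ā − a| that the solver accounts for:

```python
from fractions import Fraction as F

U = 2.0 ** -53  # unit roundoff of IEEE double precision

def dot_with_error_bound(a, x):
    """Compute s = sum(a_i * x_i) in floating point together with a running
    bound mu on |s - exact value|. First-order bound: each product and each
    addition contributes at most U times its magnitude; a fully rigorous
    implementation would inflate mu slightly for higher-order terms."""
    s, e = 0.0, 0.0
    for ai, xi in zip(a, x):
        t = ai * xi
        s = s + t
        e = e + abs(t) + abs(s)   # accumulate local error magnitudes
    return s, U * e

def safe_check_ge(a, x, bj):
    """True if a.x >= bj surely holds, False if it surely fails,
    None if the floating-point check is inconclusive."""
    s, mu = dot_with_error_bound(a, x)
    if s - mu >= bj:
        return True
    if s + mu <= bj:
        return False
    return None
```

Only the `None` case, typically a constraint satisfied at or very close to equality, falls back to the exact rational computation.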
In the former case, the solution x* is guaranteed to fulfill Σ_{i=1}^n a_i x*_i ≥ b_j; in the latter, we can safely determine that the inequality is violated; only if neither case occurs do we recompute the activity in exact arithmetic.

We note that this could alternatively be achieved by directed rounding, which would give tighter error bounds at a slightly increased computational effort. However, empirically we have observed that most equality or inequality constraints are either satisfied at equality, where an exact arithmetic check cannot be avoided, or they are violated or satisfied by a slack larger than the error bound µ, hence the running error analysis is sufficient to determine feasibility.

4 Computational Study

We conduct a computational analysis to answer three main questions.
First, how does the revised branch-and-bound framework compare to the previous implementation, and to which components can the changes be attributed? To answer this question, we compare the original framework [16] against our improved implementation, including the exact LP solver SoPlex, but with primal heuristics and exact presolving still disabled. In particular, we analyze the importance and performance of the different dual bounding methods.
Second, what is the impact of the new algorithmic components, symbolic presolving and primal heuristics? To answer this question, we compare their impact on the solving time and the number of solved instances, and present more in-depth statistics, such as the primal integral [9] for heuristics or the number of fixings for presolving. In addition, we compare the effectiveness of performing presolving in rational and in floating-point arithmetic.
Finally, what is the overhead for producing and verifying certificates? Here, we consider running times for both the solver and the certificate checker, as well as the overhead in the safe dual bounding methods introduced through enabling certificates. This provides an update for the analysis in [13], which was limited to the two bounding methods project-and-shift and exact LP.

The experiments were performed on a cluster of Intel Xeon E5-2660 CPUs with 2.6 GHz and 128 GB main memory. As in [16], we use CPLEX as the floating-point LP solver; due to compatibility issues, the original and the new framework had to be linked against different CPLEX versions. We use the same QSopt_ex version as in [16], together with current versions of SoPlex and PaPILO. We evaluate on the two test sets from [16]: one set of instances selected to be easy for an inexact floating-point branch-and-bound solver (fpeasy), and one set of 50 instances that were found to be numerically challenging, e.g., due to poor conditioning or large coefficient ranges (numdiff). For a detailed description of the selection criteria, we refer to [16]. To complement these test sets with a set of more ambitious and recent instances, we conduct a final comparison on the MIPLIB 2017 [22] benchmark set.

All experiments to evaluate the new code are run with three different random seeds, where we treat each instance-seed combination as a single observation. As this feature is not available in the original framework, all comparisons with the original framework were performed with one seed. The time limit was set to 7200 seconds for all experiments. If not stated otherwise, all aggregated numbers are shifted geometric means with a shift of 0.001 seconds or 100 branch-and-bound nodes, respectively.
The branch-and-bound framework. As a first step, we compare the behavior of the safe branch-and-bound implementation from [16], with QSopt_ex as the exact LP solver, against its revised implementation with SoPlex. The aggregated results are shown in Table 1. The new framework solves additional instances on fpeasy and 7 more on numdiff. On fpeasy, we observe a reduction of 69.8% in solving time and of 87.3% in safe dual bounding time. On numdiff, we observe a reduction of 80.3% in solving time and of 88.3% in the time spent overall in the safe dual bounding methods. We also see this significant performance improvement reflected in the two performance profiles in Fig. 1.
Table 1: Comparison of original and new framework with presolving and primal heuristics disabled
(per test set: size, and, for each framework, the number of solved instances, solving time, nodes, and dual bounding time)

Table 2: Comparison of safe dual bounding techniques
(per test set: calls per node, time per call, and fraction of solving time for bshift, pshift, and exlp in each framework)

We identify a more aggressive use of project-and-shift and faster exact LP solves as the two key factors for this improvement. In the original framework, project-and-shift is restricted to instances with fewer than 10000 nonzeros. One reason for this limit is that a large auxiliary LP has to be solved by the exact LP solver to compute the relative interior point in project-and-shift. With the improvements in exact LP performance, it proved beneficial to remove this working limit in the new framework.

The effect of this change can also be seen in the detailed analysis of bounding times given in Table 2. For calls per node and the fraction of bounding time per total solving time, which are normalized well, we report the arithmetic means; for time per call, we report geometric means over all instances where the respective bounding method was called at least once.

The fact that time per call for project-and-shift ("pshift") in the new framework increased by a factor of 3 (fpeasy) and roughly 9 (numdiff) is for the reason discussed above—it is now also called on larger instances. This is beneficial overall since it replaces many slower exact LP calls. The decrease in exact LP solving time per call ("exlp") by a factor of roughly 2 (numdiff) and 5 (fpeasy) can also partly be explained by this change, and partly by an overall performance improvement in exact LP solving due to the use of LP iterative refinement [24]. The increase in bound-shift time ("bshift") is due to implementation details that will be addressed in future versions of the code, but its fraction of the total solving time is still relatively low. Finally, we observe a decrease in the total number of safe bounding calls per node. One reason is that we now disable bound-shift dynamically if its success rate drops below 20%.
Fig. 1: Performance profiles comparing solving time of original and new framework without presolving and heuristics for fpeasy (left) and numdiff (right)

Overall, we see a notable speedup and more solved instances, mainly due to the better management of dual bounding methods and faster exact LP solving.
Symbolic presolving. Before measuring the overall performance impact of exact presolving, we address the question how effective and how expensive presolving in rational arithmetic is compared to standard floating-point presolving. For both variants, we configured PaPILO to use the same tolerances for determining whether a reduction found is strong enough to be accepted. The only difference in the rational version is that all computations are done in exact arithmetic and the tolerance to compare numbers and the feasibility tolerance are zero. Note that a priori it is unclear whether rational presolving yields more or fewer reductions. Dominated column detection may be less successful due to the stricter comparison of coefficients; the dual inference presolver might be more successful if it detects earlier that a dual multiplier is strictly bounded away from zero.

Table 3 presents aggregated results for presolving time, the number of presolving rounds, and the number of found fixings, aggregations, and bound changes. We use a shift of 1 for the geometric means of rounds, aggregations, fixings, and bound changes to account for instances where presolving found no such reductions.

Remarkably, both variants yield virtually the same results on fpeasy. On numdiff, there are small differences, with a slight decrease in the number of fixings and aggregations and a slight increase in the number of bound changes for the exact variant. The time spent for exact presolving increases by more than an order of magnitude, but symbolic presolving is still not a performance bottleneck. It consumed only 0.86% (fpeasy) and 2.1% (numdiff) of the total solving time, as seen in Table 4. Exploiting parallelism in presolving provided no measurable benefit for floating-point presolving, but reduced symbolic presolving time by 44% (fpeasy) to 43.8% (numdiff). However, this benefit can be more pronounced on individual instances, e.g., on nw04, where parallelization reduces the time for rational presolving by a factor of 6.

Table 3: Comparison of exact and floating-point presolving
(per test set and thread count: time, rounds, fixings, aggregations, and bound changes for floating-point and exact presolving)

Table 4: Comparison of new framework with and without presolving (3 seeds)
(per test set: size, solved, time, and nodes with presolving disabled; solved, time, presolving time, and nodes with presolving enabled)

Next, we evaluate the overall performance with presolving enabled. The results for all instances that could be solved to optimality by at least one setting are presented in Table 4. Enabling presolving solves 3 more instances on fpeasy and 10 more instances on numdiff. We observe a reduction in solving time of 39.4% (fpeasy) and 72.9% (numdiff). The stronger impact on numdiff is correlated with the larger number of reductions observed in Table 3.
Primal heuristics. To improve primal performance, we enabled all SCIP heuristics that the floating-point version executes by default. To limit the fraction of solving time spent in the repair heuristic described in Section 3, the repair heuristic is only allowed to run at any point in the solve if it was called at most half as often as the exact LP calls for safe dual bounding. Furthermore, the repair heuristic is disabled on instances with more than 80% continuous variables, since the overhead of the exact LP solves can drastically worsen the performance on those instances. Whenever the repair step is not executed, the floating-point solutions are checked directly for exact feasibility.

First, we evaluate the cost and success of the exact repair heuristic over all instances where it was called at least once. The results are presented in Table 5. The repair heuristic is effective at finding feasible solutions, with a success rate of 46.9% (fpeasy) and 25.6% (numdiff). The fraction of the solving time spent in the repair heuristic is well below 1%. Nevertheless, the strict working limits we imposed are necessary, since there exist outliers for which the repair heuristic takes more than 5% of the total solving time, and performance on these instances would quickly deteriorate if the working limits were relaxed.

Table 5: Statistics of repair heuristic for instances where repair step was called
(per test set: size, total solving time, repair time, and the number of failed and successful calls with the resulting success rate)

Table 6: Comparison of new framework with and without primal heuristics (3 seeds, presolving enabled, instances where repair step was called)
(per test set: size, solving time, time-to-first solution, and primal integral with heuristics disabled and enabled)

Table 6 shows the overall performance impact of enabling heuristics over all instances that could be solved by at least one setting. On both sets, we see almost no change in total solving time. On fpeasy, the time to find the first solution decreases by 86.7% and the primal integral decreases by roughly 13%. The weaker impact on numdiff is expected, considering that this test set was curated to contain instances with numerical challenges. On those instances, floating-point heuristics find solutions that might either not be feasible in exact arithmetic or not be possible to fix for the repair heuristic. In both test sets, the repair heuristic was able to find solutions while not imposing any significant overhead in solving time.

Producing and verifying certificates.
The possibility to log certificates as presented in [13] is available in the new framework and is extended to also work when the dual bounding method bound-shift is active. Presolving must currently be disabled, since PaPILO does not yet support generation of certificates.

Besides ensuring correctness of results, certificate generation is valuable to ensure correctness of the solver. Although it does not check the implementation itself, it can help identify and eliminate incorrect results that do not directly lead to fails. For example, on one instance of the numdiff set, the original framework claimed infeasibility at the root node, and while the instance is indeed infeasible, we found the reasoning for this to be incorrect due to the use of a certificate.

Table 7 reports the performance overhead when enabling certificates. Here we only consider instances that were solved to optimality by both versions, since timeouts would bias the results in favor of the certificate. We see an increase in solving time of 101.2% on fpeasy and of 51.4% on numdiff. This confirms the measurements presented in [13]. The increase is explained in part by the effort to keep track of the tree structure and print the exact dual multipliers, and in part by an increase in dual bounding time. The reason for the latter is that bound-shift by default only provides a safe objective value. The dual multipliers needed for the certificate must be computed in a postprocessing step, which introduces the overhead in safe bounding time. This overhead is larger on fpeasy, since bound-shift is called more often. The time spent in the verification of the certificate is on average significantly lower than the time spent in the solving process. Overall, the overhead from printing and checking certificates is not negligible, but neither does it drastically impact the solvability of instances.

Table 7: Overhead for producing and verifying certificates on instances solved by both variants
(per test set: size, solving time, and dual bounding time with certificates disabled and enabled, plus certificate check time and overall overhead)

Table 8: Comparison on MIPLIB 2017 benchmark set
(per subset: size, and, for each framework, the number of instances solved, instances with a solution found, time, and gap; on all 240 instances, the original framework solved 17 and found a solution on 74, while the new framework solved 47 and found a solution on 167)

Performance comparison on MIPLIB 2017.
As a final experiment, we wanted to evaluate the performance on a more ambitious and diverse test set. To that end, we ran both the original framework and the revised framework, with presolving and heuristics enabled, on the recent MIPLIB 2017 benchmark set. The results in Table 8 show that the new framework solved 30 instances more (47 vs. 17) and that the mean solving time decreased by 84.8% on the subset "onesolved" of instances that could be solved to optimality by at least one solver. On more than twice as many instances at least one primal solution was found (167 vs. 74). On the subset of 66 instances that had a finite gap for both versions, the new algorithm achieved a gap of 33.8% in arithmetic mean, compared to roughly 67% for the original framework.

Acknowledgements.
We wish to thank Dan Steffy for valuable discussions on the revision of the original branch-and-bound framework, Leona Gottwald for creating PaPILO, and Antonia Chmiela for help with implementing the primal repair heuristic.
References
1. Achterberg, T.: Constraint Integer Programming. Ph.D. thesis, Technische Universität Berlin (2007)
2. Achterberg, T., Bixby, R.E., Gu, Z., Rothberg, E., Weninger, D.: Presolve reductions in mixed integer programming. INFORMS Journal on Computing (2), 473–506 (2020). https://doi.org/10.1287/ijoc.2018.0857
3. Achterberg, T., Koch, T., Martin, A.: Branching rules revisited. Operations Research Letters (1), 42–54 (2005). https://doi.org/10.1016/j.orl.2004.04.002
4. Achterberg, T., Wunderling, R.: Mixed integer programming: Analyzing 12 years of progress. In: Jünger, M., Reinelt, G. (eds.) Facets of Combinatorial Optimization. pp. 449–481 (2013). https://doi.org/10.1007/978-3-642-38189-8_18
5. Applegate, D., Bixby, R., Chvátal, V., Cook, W.: Concorde TSP Solver (2006)
6. Applegate, D., Cook, W., Dash, S., Espinoza, D.G.: Exact solutions to linear programming problems. Operations Research Letters (6), 693–699 (2007). https://doi.org/10.1016/j.orl.2006.12.010
7. Assarf, B., Gawrilow, E., Herr, K., Joswig, M., Lorenz, B., Paffenholz, A., Rehn, T.: Computing convex hulls and counting integer points with polymake. Mathematical Programming Computation (1), 1–38 (2017). https://doi.org/10.1007/s12532-016-0104-z
8. Bagnara, R., Hill, P.M., Zaffanella, E.: The Parma Polyhedra Library: Toward a complete set of numerical abstractions for the analysis and verification of hardware and software systems. Science of Computer Programming (1–2), 3–21 (2008)
9. Berthold, T.: Measuring the impact of primal heuristics. Operations Research Letters (6), 611–614 (2013). https://doi.org/10.1016/j.orl.2013.08.007
10. Biere, A., Heule, M., van Maaren, H., Walsh, T.: Handbook of Satisfiability, Volume 185 of Frontiers in Artificial Intelligence and Applications. IOS Press, NLD (2009)
11. Bofill, M., Manyà, F., Vidal, A., Villaret, M.: New complexity results for Łukasiewicz logic. Soft Computing, 2187–2197 (2019). https://doi.org/10.1007/s00500-018-3365-9
12. Burton, B.A., Ozlen, M.: Computing the crosscap number of a knot using integer programming and normal surfaces. ACM Transactions on Mathematical Software (1) (2012). https://doi.org/10.1145/2382585.2382589
13. Cheung, K.K., Gleixner, A., Steffy, D.E.: Verifying integer programming results. In: International Conference on Integer Programming and Combinatorial Optimization. pp. 148–160. Springer (2017). https://doi.org/10.1007/978-3-319-59250-3_13
14. Cheung, K., Gleixner, A., Steffy, D.: VIPR. Verifying Integer Programming Results. https://github.com/ambros-gleixner/VIPR (accessed November 11, 2020)
15. Cook, W., Dash, S., Fukasawa, R., Goycoolea, M.: Numerically safe Gomory mixed-integer cuts. INFORMS Journal on Computing, 641–649 (2009). https://doi.org/10.1287/ijoc.1090.0324
16. Cook, W., Koch, T., Steffy, D.E., Wolter, K.: A hybrid branch-and-bound approach for exact rational mixed-integer programming. Mathematical Programming Computation (3), 305–344 (2013). https://doi.org/10.1007/s12532-013-0055-6
17. Eifler, L., Gleixner, A.: Exact SCIP – a development version. https://github.com/leoneifler/exact-SCIP (accessed November 11, 2020)
18. Eifler, L., Gleixner, A., Pulaj, J.: A safe computational framework for integer programming applied to Chvátal's conjecture (2020)
19. Espinoza, D.G.: On Linear Programming, Integer Programming and Cutting Planes. Ph.D. thesis, Georgia Institute of Technology (2006)
20. Faure, G., Nieuwenhuis, R., Oliveras, A., Rodríguez-Carbonell, E.: SAT modulo the theory of linear arithmetic: Exact, inexact and commercial solvers. In: Kleine Büning, H., Zhao, X. (eds.) Theory and Applications of Satisfiability Testing – SAT 2008. pp. 77–90. Springer Berlin Heidelberg, Berlin, Heidelberg (2008)
21. Gamrath, G., Anderson, D., Bestuzheva, K., Chen, W.K., Eifler, L., Gasse, M., Gemander, P., Gleixner, A., Gottwald, L., Halbig, K., Hendel, G., Hojny, C., Koch, T., Le Bodic, P., Maher, S.J., Matter, F., Miltenberger, M., Mühmer, E., Müller, B., Pfetsch, M., Schlösser, F., Serrano, F., Shinano, Y., Tawfik, C., Vigerske, S., Wegscheider, F., Weninger, D., Witzig, J.: The SCIP Optimization Suite 7.0. ZIB-Report 20-10, Zuse Institute Berlin (2020)
22. Gleixner, A., Hendel, G., Gamrath, G., Achterberg, T., Bastubbe, M., Berthold, T., Christophel, P.M., Jarck, K., Koch, T., Linderoth, J., Lübbecke, M., Mittelmann, H.D., Ozyurt, D., Ralphs, T.K., Salvagnin, D., Shinano, Y.: MIPLIB 2017: Data-driven compilation of the 6th mixed-integer programming library. Mathematical Programming Computation (2020), accepted for publication
23. Gleixner, A., Steffy, D.E.: Linear programming using limited-precision oracles. Mathematical Programming, 525–554 (2020). https://doi.org/10.1007/s10107-019-01444-6
24. Gleixner, A., Steffy, D.E., Wolter, K.: Iterative refinement for linear programming. INFORMS Journal on Computing (3), 449–464 (2016). https://doi.org/10.1287/ijoc.2016.0692
25. Gottwald, L.: PaPILO – Parallel Presolve for Integer and Linear Optimization. https://github.com/lgottwald/PaPILO (accessed September 9, 2020)
26. Granlund, T., Team, G.D.: GNU MP 6.0 Multiple Precision Arithmetic Library. Samurai Media Limited, London, GBR (2015)
27. Higham, N.J.: Accuracy and Stability of Numerical Algorithms. Society for Industrial and Applied Mathematics, Philadelphia, PA, USA, second edn. (2002). https://doi.org/10.1137/1.9780898718027
28. Kenter, F., Skipper, D.: Integer-programming bounds on pebbling numbers of Cartesian-product graphs. In: Kim, D., Uma, R.N., Zelikovsky, A. (eds.) Combinatorial Optimization and Applications. pp. 681–695 (2018). https://doi.org/10.1007/978-3-030-04651-4_46
29. Lancia, G., Pippia, E., Rinaldi, F.: Using integer programming to search for counterexamples: A case study. In: Kononov, A., Khachay, M., Kalyagin, V.A., Pardalos, P. (eds.) Mathematical Optimization Theory and Operations Research. pp. 69–84 (2020). https://doi.org/10.1007/978-3-030-49988-4
30. de Moura, L., Bjørner, N.: Z3: An efficient SMT solver. In: Ramakrishnan, C.R., Rehof, J. (eds.) Tools and Algorithms for the Construction and Analysis of Systems. pp. 337–340 (2008). https://doi.org/10.1007/978-3-540-78800-3_24
31. Neumaier, A., Shcherbina, O.: Safe bounds in linear and mixed-integer programming. Mathematical Programming, 283–296 (2002). https://doi.org/10.1007/s10107-003-0433-3
32. Pulaj, J.: Cutting planes for families implying Frankl's conjecture. Mathematics of Computation (322), 829–857 (2020). https://doi.org/10.1090/mcom/3461
33. Steffy, D.E., Wolter, K.: Valid linear programming bounds for exact mixed-integer programming. INFORMS Journal on Computing (2), 271–284 (2013). https://doi.org/10.1287/ijoc.1120.0501
34. Wetzler, N., Heule, M.J.H., Hunt, W.A.: DRAT-trim: Efficient checking and trimming using expressive clausal proofs. In: Sinz, C., Egly, U. (eds.) Theory and Applications of Satisfiability Testing – SAT 2014. pp. 422–429 (2014). https://doi.org/10.1007/978-3-319-09284-3_31
35. Wilken, K., Liu, J., Heffernan, M.: Optimal instruction scheduling using integer programming. SIGPLAN Not. 35