Using Matching to Detect Infeasibility of Some Integer Programs
S.J. GISMONDI ∗, E.R. SWART †

Abstract.
A novel matching based heuristic algorithm designed to detect specially formulated infeasible {0,1} IPs is presented. The algorithm's input is a set of nested doubly stochastic subsystems and a set E of instance defining variables set at zero level. The algorithm deduces additional variables at zero level until either a constraint is violated (the IP is infeasible), or no more variables can be deduced zero (the IP is undecided). All feasible IPs, and all infeasible IPs not detected infeasible, are undecided. We successfully apply the algorithm to a small set of specially formulated infeasible {0,1} IP instances of the Hamilton cycle decision problem. We show how to model both the graph and subgraph isomorphism decision problems for input to the algorithm. Increased levels of nested doubly stochastic subsystems can be implemented dynamically. The algorithm is designed for parallel processing, and for inclusion of techniques in addition to matching.
Key words. integer program, matching, permutations, decision problem
MSC Subject classifications.
1. Introduction.
We present a novel matching based heuristic algorithm designed to detect specially formulated infeasible {0,1} IPs. It either detects an infeasible IP or exits undecided. It does not solve an IP. We call it the triple overlay matching based closure algorithm (the algorithm). Input to the algorithm is an IP whose constraints are a set of nested doubly stochastic boolean subsystems [12] together with a set E of instance defining variables set at zero level. The IP's solution set is a subset of the set of n! n x n permutation matrices P, written as n x n block permutation matrices Q, each with block structure P. The algorithm is a polynomial time search that deduces additional variables at zero level via matching until either a constraint is violated, in which case the IP is infeasible, or we can go no further, in which case the IP is undecided. If the IP is decided infeasible, a set of variables deduced to be at zero level can be used to test and display a set of violated constraints. If the IP is undecided, additional variables deduced zero can be added to E, and nothing more can be concluded. While some infeasible IPs may fail to be detected infeasible (none yet found), feasible IPs can only fall in the undecided category.

In section 2 we present the generic IP required as input to the algorithm, and we view the set of all solutions of the IP as an n x n block permutation matrix Q whose components are {0,1} variables. Each n x n block (u, i) is an n x n permutation matrix P where block (u, i) contains p u,i in position (u, i). An instance is modelled by setting certain variables of Q to zero level. In sections 3, 4 and 5, we present the algorithm, an application / matching model of the Hamilton cycle decision problem (HCP), empirical results and two conjectures. In section 6, we present generalizations of the algorithm, matching models for both the graph and subgraph isomorphism decision problems, and other uses. We also propose more development.
Its success, effectiveness and practicality can then be evaluated in comparison to other algorithms. We invite researchers to collaborate with us. Contact the corresponding author for FORTRAN code.

The ideas presented in this paper originated from a polyhedral model of cycles not in graphs [9]. At that time we thought about how to recognize the Birkhoff polytope as an image of a solution set of a compact formulation for non-Hamiltonian graphs. We've accomplished part of that goal in this paper. That is, the convex hull of all excluded permutations for infeasible IPs is the Birkhoff polytope, and it's easy to build a compact formulation from E. In this paper, over 2,100 non-Hamiltonian graphs ranging from 10 to 104 vertices are correctly decided as infeasible IPs. None failed that are not reported. Although counterexamples surely exist, we believe there is an insightful theory to be discovered that explains these early successes.
2. About Specially Constructed {0,1} IPs and Terminology.
Imagine a {0,1} integer program modelled such that P is a solution if and only if the integer program is feasible, e.g. matching. Also imagine an arbitrary set of instance defining constraints of the form p u,i + p v,j ≤ 1. It's not obvious how to apply matching to help in its solution. Now imagine that we create a compact formulation whose solution set is isomorphic (i.e. equal under an orthogonal projection), where we convert each linear constraint into all of its instantiated discrete states via creation of a set of discrete {0,1} variables. Then it becomes easy to exploit matching. Hence the algorithm.

∗ University of Guelph, Canada, Email: [email protected] (Corresponding Author)
† Kelowna, British Columbia, Canada, Email: [email protected]
Ted and I dedicate this paper to the late Pal Fischer (16/11/2016) - friend, colleague and mentor.

Encode the IP above so that each of the instance defining constraints is a set of two distinct components of P, {p u,i, p v,j}, interchangeably playing the role of a {0,1} variable for which {p u,i, p v,j} = 0 if and only if p u,i = p v,j = 0, or p u,i = 1 and p v,j = 0, or p u,i = 0 and p v,j = 1. Create an instance of the IP by creating an instance of exclusion set E whose elements are the set of these {p u,i, p v,j}. If there exists P satisfying {p u,i, p v,j} = 0 for all {p u,i, p v,j} ∈ E, then P is a solution of the IP. Otherwise P satisfies {p u,i, p v,j} = 1 for at least one {p u,i, p v,j} ∈ E and P is excluded from the solution set of the IP. We view elements of E as coding precisely the set of permutation matrices excluded from the solution set of the IP. That is, E excludes the union of sets of (n − 2)! P, each set satisfying {p u,i, p v,j} = 1, for each {p u,i, p v,j} ∈ E. An example of the modelling technique needed to create E is presented in section 4, originally presented in [9]. We exclude these permutation matrices by setting {p u,i, p v,j} = 0 for each {p u,i, p v,j} ∈ E.

The complement of exclusion set E, with respect to all {p u,i, p v,j}, is called available set V. The IP is feasible if and only if there exists P whose set of n(n − 1)/2 distinct pairs of components {p u,i, p v,j} that satisfy p u,i = p v,j = 1 and define P are in V. P is said to be covered by V if there exists a subset of n(n − 1)/2 {p u,i, p v,j} ∈ V such that p u,i = p v,j = 1 defines P and each {p u,i, p v,j} participates in P's cover.

Definition 2.1.
Clos(E) (closed exclusion set E) is the set of all {p u,i, p v,j} not participating in any cover of any P. Note that {p u,i, p v,j} ∈ E is code for the set of (n − 2)! P satisfying p u,i = p v,j = 1. Clearly if clos(E) is such that all n! permutation matrices are accounted, then there is no P covered by V, i.e. V is empty.

Definition 2.2.
Open(V) (open available set V) is the complement of clos(E) w.r.t. all {p u,i, p v,j}, the set of all {p u,i, p v,j} participating in a cover of at least one P.

Theorem 2.1.
The IP is infeasible if and only if open(V) = ∅.

System 1:
    Σ_i p u,i = 1, u = 1, 2, ..., n
    Σ_u p u,i = 1, i = 1, 2, ..., n
    For all u, i = 1, 2, ..., n:
        Σ_{j ≠ i} {p u,i, p v,j} = p u,i, v = 1, 2, ..., n, v ≠ u.
        Σ_{v ≠ u} {p u,i, p v,j} = p u,i, j = 1, 2, ..., n, j ≠ i.
    {p u,i, p v,j} ∈ E ⇒ Assign {p u,i, p v,j} = 0.
    p u,i, {p u,i, p v,j} ∈ {0,1}

Visualize system 1 in the form of matrix Q, an n x n arrangement of n x n blocks of P. Block (u, i) contains p u,i in position (u, i), the remaining entries in row u and column i being zero. The rest of the entries in block (u, i) have the form {p u,i, p v,j}, v ≠ u, j ≠ i. It's assumed variables in Q have been initialized by E. Henceforth, we present the algorithm in terms of matrix Q. See Figure 2.1, an example of the general form of matrix Q for n = 4. For E = ∅, the set of n! n x n permutation matrices each written as a Q matrix, i.e. in n x n block form, is the set of integer extrema of the solution set of system 1 §. See Figure 2.2, an example of an integer solution to system 1 in matrix Q form, for n = 4.
[Figure: the general 16 x 16 matrix Q for n = 4; each block (u, i) holds p u,i at position (u, i) and the {p u,i, p v,j} at positions (v, j), v ≠ u, j ≠ i, with zeros elsewhere in row u and column i.]

Fig. 2.1. General Form of Matrix Q, n = 4.

§ An integer solution of system 1 exists if and only if it's an n x n permutation matrix in n x n block form [7].

[Figure omitted.]

Fig. 2.2. An Integer Solution to System 1 in Matrix Q Form, n = 4.
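For small n, both the block structure of Q and the sets clos(E) / open(V) can be illustrated by brute force. The sketch below is our own illustrative Python (the paper's implementation is FORTRAN, available from the corresponding author); `q_from_permutation` and `brute_force_open_V` are hypothetical helper names of ours:

```python
from itertools import combinations, permutations, product

def q_from_permutation(perm):
    """Q (as a dict of n x n blocks) induced by the permutation matrix P
    with p[u][i] = 1 iff perm[u] == i.  Block (u, i) holds p[u][i] at
    (u, i), the products p[u][i]*p[v][j] at (v, j) for v != u, j != i
    (the {p[u,i], p[v,j]} positions), and zeros elsewhere."""
    n = len(perm)
    p = [[1 if perm[u] == i else 0 for i in range(n)] for u in range(n)]
    Q = {}
    for u, i in product(range(n), repeat=2):
        blk = [[0] * n for _ in range(n)]
        blk[u][i] = p[u][i]
        for v, j in product(range(n), repeat=2):
            if v != u and j != i:
                blk[v][j] = p[u][i] * p[v][j]
        Q[(u, i)] = blk
    return p, Q

def brute_force_open_V(n, E):
    """open(V) by enumeration for tiny n: a pair {p[u,i], p[v,j]} is in
    open(V) iff it participates in a cover of at least one permutation
    matrix P not excluded by E (E = set of frozensets of two cells)."""
    open_V = set()
    for perm in permutations(range(n)):
        cells = [(u, perm[u]) for u in range(n)]   # the n unit entries of P
        pairs = {frozenset(c) for c in combinations(cells, 2)}
        if not pairs & E:       # P survives E, so its pairs all cover P
            open_V |= pairs
    return open_V

# Integer solutions of system 1: every row and column of block (u, i)
# sums to p[u][i] ('scaled' double stochasticity), here for n = 4.
p, Q = q_from_permutation((1, 2, 3, 0))
for (u, i), blk in Q.items():
    assert all(sum(row) == p[u][i] for row in blk)
    assert all(sum(blk[v][j] for v in range(4)) == p[u][i] for j in range(4))

# n = 3: excluding {p[0,0], p[1,1]} kills the unique P through both cells,
# so that P's other two pairs fall into clos(E) as well.
E = {frozenset({(0, 0), (1, 1)})}
V = brute_force_open_V(3, E)
assert frozenset({(0, 0), (2, 2)}) not in V    # in clos(E) though not in E
assert len(V) == 15                            # 18 pairs in total, 3 closed
```

Note how the single excluded pair closes two further pairs: this deduction of additional zero-level variables is exactly what the algorithm of section 3 performs without enumeration.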
3. About Triple Overlay Matching Based Closure.
We first present an overview of the algorithm, followed by the formal algorithm. Let E be given. Encode Q and create V. Rather than search for the existence of P covered by V, we attempt to shrink V so that {p u,i, p v,j} ∈ V if and only if {p u,i, p v,j} participates in a cover of at least one P. The algorithm deduces which {p u,i, p v,j} ∈ V do not participate in any cover of any P, removes them from V, and adds them to E. Its success depends upon whether or not it's true that for infeasible IPs, when we initialize Q via E, it's sufficient to deduce open(V) = ∅ ¶. While it's impossible for a feasible IP to yield open(V) = ∅, infeasible IPs cause the algorithm to either deduce infeasibility or exit undecided. We say undecided because although we deduce some of these {p u,i, p v,j} ∈ V that do not participate in any cover of any P, it's not known if we deduce all of these {p u,i, p v,j}. Brief details about how the algorithm deduces variables at zero level in every solution of the IP now follow.

The algorithm systematically tests a set of necessary conditions assuming a feasible IP each time a q u,i,v,j is set at unit level. That is, if p u,i = p v,j = 1, blocks (u, i) and (v, j) are assumed to cover a match, a necessary condition for the existence of an n x n block permutation matrix solution of the IP. But rather than test for a match covered by these two blocks, we exhaust all choices of a third variable common to these blocks, set at unit level, and test for the existence of a match covered by all three blocks. After exhausting all possible choices of a third ‡‡ variable, if no match exists, the given q u,i,v,j variable is deduced zero. Otherwise we conclude nothing. In both cases we continue on to the next variable not yet deduced zero.
Eventually no more variables can be deduced zero, none of the constraints appear violated and the IP is undecided; or enough of the variables are deduced zero such that a constraint is violated and the IP is infeasible.

Interchangeably associate matrix Q with a {0,1} matrix that has entries at zero level where matrix Q has entries at zero level, and unit entries where matrix Q has p u,i or {p u,i, p v,j} entries. We'll now reference {0,1} variables q u,i,u,i and q u,i,v,j. A unit entry in the u th row and i th column of non-empty block (u, i) represents variable p u,i. The remaining unit entries in the v th row and j th column of block (u, i) with u ≠ v and i ≠ j can be regarded as representing p v,j variables (which is what they really do represent in the case of a {0,1} solution) and, they can also be regarded as representing {p u,i, p v,j} variables. We think of this associated matrix in terms of patterns in Q that cover n x n block permutation matrices, and then we'll exploit matching.

Definition 3.1.
Match(*) is a logical function. Input * is an n x n {0,1} matrix. Row labels are viewed as vertices 1 through n (set A), and column labels are viewed as vertices n + 1 through 2n (set B). Match(*) returns TRUE if there exists a match between A and B. Otherwise Match(*) returns FALSE.

¶ In earlier work [12], we create an equivalence class, the set of all possible V's none of which cover any P, whose class representative is ∅.
‡‡ Hence the term triple overlay. Every variable not deduced zero participates in a match in an overlay of three blocks of Q. There exist quadruple and quintuple overlays through to exhaustion, where the algorithm tests factorially many overlays of blocks.

Definition 3.2.
Overlay(*1, *2) is a binary AND function applied component-wise to two n x n {0,1} matrices. Its output is a {0,1} matrix. We loosely use the terms double and triple overlay in place of Overlay(*1, *2) and Overlay(Overlay(*1, *2), *3) etc.

Definition 3.3.
Check_RowsColumns_Q is a routine that returns TRUE if a row or column in matrix Q is all 0, in which case the algorithm terminates and the graph is deduced infeasible. Otherwise Check_RowsColumns_Q returns FALSE. In our FORTRAN implementation of the algorithm, before testing for termination, we also implement Boolean closure within and between blocks in Q. This efficiently deduces some of the non-zero components of Q to be at zero level and we note significant speed increases. Note that Boolean closure in
Check_RowsColumns_Q can be replaced by LP. Temporarily set a non-zero component of matrix Q to unit level and check for infeasibility subject to the doubly stochastic constraints of matrix Q. Infeasibility implies that the component can be set to zero level.

Whenever the algorithm exits undecided, then for every non-zero q u,i,v,j, there exists a match in a triple overlay of blocks (u, i), (v, j) and at least one (w, k) block. The IP is then not deduced infeasible and we call the corresponding matrix Q the non-empty triple overlay closure of the IP. Otherwise the algorithm exits and the IP is deduced infeasible, i.e. open(V) is deduced to be empty.

Input: {open(V) ← V, Q}  Output: {open(V), decision}
if Check_RowsColumns_Q then EXIT {open(V), infeasible};
CONTINUE TRIPLE CLOSURE:
oldQ ← Q;
for u, i = 1, 2, ..., n and q u,i,u,i ≠ 0 do
    if ¬Match(Q(u,i)) then
        q u,i,u,i ← 0;
        for v, j = 1, 2, ..., n with u ≠ v, i ≠ j and q u,i,v,j ≠ 0 do
            q u,i,v,j ← q v,j,u,i ← 0;
            open(V) ← open(V) \ {p u,i, p v,j} \ {p v,j, p u,i};
        end
        if Check_RowsColumns_Q then EXIT {open(V), infeasible};
        ⇒ NEXT i;
    end
    for v, j = 1, 2, ..., n with u ≠ v, i ≠ j and Q(u,i) v,j ≠ 0 do
        if ¬Match(Overlay(Q(u,i), Q(v,j))) then
            q u,i,v,j ← q v,j,u,i ← 0;
            open(V) ← open(V) \ {p u,i, p v,j} \ {p v,j, p u,i};
            ⇒ NEXT j;
        end
        DoubleOverlay ← Overlay(Q(u,i), Q(v,j));
        TRIPLE CLOSURE:
        for w, k = 1, 2, ..., n with u ≠ w ≠ v, i ≠ k ≠ j and DoubleOverlay w,k ≠ 0 do
            if ¬Match(Overlay(DoubleOverlay, Q(w,k))) then
                DoubleOverlay w,k ← 0;
                ⇒ TRIPLE CLOSURE;
            end
        end
        if ¬Match(DoubleOverlay) then
            q u,i,v,j ← q v,j,u,i ← 0;
            open(V) ← open(V) \ {p u,i, p v,j} \ {p v,j, p u,i};
        end
    end
end
if oldQ ≠ Q ⇒ CONTINUE TRIPLE CLOSURE;
EXIT {open(V), undecided};

Algorithm 1:
The Triple Overlay Matching Based Closure Algorithm.

4. Application to the HCP.

Let G be an n + 1 vertex graph also referenced by its adjacency matrix. We model the HCP for simple, connected 3-regular graphs, as do others [5, 10, 3], called the 3HCP. The 3HCP is a well known decision problem and is NP-complete [6]. G is 3-colourable (edge) if G is Hamiltonian, and since 3-regular graphs are either 3 or 4-colourable, it follows that if G is 4-colourable then G is non-Hamiltonian. These graphs were initially studied by Peter Tait in the 1880s (named Snarks by Martin Gardner in 1976). Tait conjectured (1884) that every 3-connected, planar, 3-regular graph has a Hamilton cycle, later disproved by Tutte in 1946 via construction of a 46 vertex counterexample. This was a significant conjecture, and had it been true, it would have implied the famous 4-colour theorem. These ideas are summarized in the figure below.

[Figure: classification of all simple, connected, 3-regular graphs into Hamiltonian (3-colourable) graphs and non-Hamiltonian graphs; the non-Hamiltonian graphs split into 3-colourable non-Snarks (e.g. Tutte's counterexample) and 4-colourable Snarks.]

Fig. 4.1. Classification of Simple, Connected, 3-Regular Graphs
Regard paths of length n + 1 that start and stop at the same vertex and pass through every vertex, as directed graphs on n + 1 vertices. For undirected graphs, every cycle is accompanied by a counter-directed companion cycle. No matter that G is Hamiltonian or non-Hamiltonian, assign vertex n + 1 as the origin and terminal vertex for all cycles, and assign each directed Hamilton cycle to be in correspondence with each n x n permutation matrix P where p u,i = 1 if and only if the i th arc in a cycle enters vertex u. We encode each cycle as a permutation of vertex labels. For example, the path sequence {2 4 ...} is code for: the first arc enters vertex 2, the second arc enters vertex 4, and so on. Since p n+1,n+1 = 1 for all cycles by definition, it's sufficient to code cycles as n x n permutation matrices. Note that an arc is directed, and an edge is undirected, i.e. the pair of arcs (u, i) & (i, u) is the edge (u, i). Unless otherwise stated, all graphs are simple, connected and 3-regular.

We next encode graph instance G by examining G's adjacency matrix, adding to E all pairs of components of P, {p u,i, p v,j}, that encode paths of length j − i (j > i) from vertex u to vertex v in cycles not in G. This encodes precisely the set of cycles not in G, i.e. every cycle not in G uses at least one arc not in G. See the algorithm (How to Initialize Exclusion Set E) below and recall that G is connected. For arc (u, v) not in G we can assign {p u,i, p v,i+1} ∈ E. But we also compute additional {p u,i, p v,i+m} whenever it's possible to account for no paths of length m in G, from vertex u to vertex v. We do this by implementing Dijkstra's algorithm with equally weighted arcs to find minimal length paths between all pairs of vertices, coded to return m = n + 1 if no path exists. We account for all paths of length one not in G (arcs not in G), and, all paths of length two not in G, by temporarily deleting the arc between adjacent vertices. Begin as follows.
If u is adjacent to v then temporarily delete arc (u, v) and apply Dijkstra's algorithm to discover a minimal path of length m > 1, a simple 'speed-up'. No paths of length k can exist, k = 1, ..., m − 1, and {p u,i, p v,i+k} are discovered that 1) for k = 1 and u not adjacent to v, correspond with arcs in cycles not in G, and 2) for k > 1, correspond with paths of length k in cycles not in G. Accounting for all arcs not in G is sufficient to model precisely all cycles not in G, and we account for paths in cycles not in G to bolster E.

Two special cases arise.

Case 1. Last arc in cycle: Recall that every n + 1 th arc in a cycle enters vertex n + 1 by definition. Therefore observe arcs (u, n + 1) not in G, temporarily deleted or otherwise, noting how corresponding sets of cycles not in G can be encoded by permutation matrices for which the n th arc in a cycle enters vertex u, i.e. p u,n = 1. This is the case for k = 1, and u not adjacent to v, when Dijkstra's algorithm returns m = 2. If Dijkstra's algorithm returns m = 3, then again for k = 1 and if u is not adjacent to v set p u,n = 1, and for k = 2, no paths of length two exist and these sets of cycles not in G can be encoded by permutation matrices for which the n − 1 th arc in a cycle enters vertex u, i.e. p u,n−1 = 1. Continuing in this way, encode all possible n + 1 − k th arcs in cycles not in G, in paths of length k not in G, to enter vertex u, i.e. p u,n+1−k = 1, k = 1, 2, ..., m − 1.

Case 2. First arc in cycle: Recall every first arc in every cycle exits vertex n + 1. Observe and code all arcs (n + 1, v) in cycles not in G, in paths of length k not in G, by coding all possible k th arcs to enter vertex v, i.e. p v,k = 1, k = 1, 2, ..., m − 1.

Every cycle not in G uses at least one arc not in G, e.g. (u, v). The complete set of permutation matrices corresponding to these cycles not in G is characterized by {{p u,l, p v,l+1}, l = 1, 2, ..., n − 1}, added to E. By indexing l, arc (u, v) can play the role of (n − 1) sequence positions in disjoint sets of cycles not in G.
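The disjointness claim, and the count of (n − 2)! permutation matrices per pair, can be checked by brute force for small n. The sketch below is our own illustrative Python (vertex labels and variable names are ours); it codes a cycle's interior as a permutation and counts the cycles flagged by the pairs for one arc:

```python
import math
from itertools import permutations

# Code an n-cycle's interior as a permutation: seq[l] = vertex entered by
# arc l+1.  Arc (u, v) is used at an interior position iff u immediately
# precedes v in seq, i.e. p[u][l] = p[v][l+1] = 1 for some l.
n, u, v = 5, 2, 4
count = sum(
    any(seq[l] == u and seq[l + 1] == v for l in range(n - 1))
    for seq in permutations(range(1, n + 1))
)
# One pair fixes two positions of the permutation, coding (n-2)! matrices;
# the n-1 values of l index disjoint sets, so the union has
# (n-1)*(n-2)! = (n-1)! elements.
assert count == (n - 1) * math.factorial(n - 2) == math.factorial(n - 1)
```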
Considering all O(n²) arcs not in G, each playing the role of all O(n) possible sequence positions, it's possible to construct the set of permutation matrices corresponding to the set of cycles not in G, accounted by the union of O(n³) {p u,l, p v,l+1} added to E. We generalize this idea via Dijkstra's algorithm and account for some sets of paths of length k not in G.

Recall that G is strongly connected. But if an arc is temporarily deleted, it's possible for no path to exist between a given pair of vertices. This useful information indicates that an arc is essential under the assumption of the existence of a Hamilton cycle that uses this arc. In case 1, this implies that a particular p u,n is necessary, and by integrality must be at unit level in every assignment of variables, assuming the graph is Hamiltonian (until deduced otherwise, if ever). Thus all other p's in the same row (and column) can be set at zero level. This is accounted for when we initialize E. Recall that m = n + 1 in the case that Dijkstra's algorithm returns no minimal path. The k loop appends the necessary set of {p v,j, p u,n+1−k} to E, effectively setting variables in blocks (u, 1) through (u, n − 1) at zero level. When implemented in the algorithm, p u,n must attain unit level via double stochasticity, and this implies that the other p's in the same column are deduced to be at zero level. Similarly for Case 2. In the general case, it's also possible for no path to exist between a given pair of vertices (u, v) (when an arc is temporarily deleted). Under the assumption of the existence of a Hamilton cycle, this arc is essential and can play the role of sequence positions 2 through n − 1, and so in each case, all complementary row (and column) {p u,i, p v,j} are assigned to E. When implemented, a single {p u,i, p v,j} variable remains in each row and therefore is equated with that block's p u,i variable via 'scaled' double stochasticity within the block, i.e. rows and columns in the block sum to p u,i. Complementary {p u,i, p v,j} variables in the corresponding column are therefore set 0 in each block. Thus essential arcs also contribute new information by adding their complementary row / column {p u,i, p v,j} to E.

Finally, encode E into matrix Q, i.e. assign q u,i,v,j = 0 for each {p u,i, p v,j} ∈ E, and then create V.

Input: {Adjacency matrix for G}  Output: {E}
E ← ∅;
Case 1:
for u = 1, 2, ..., n do
    Arc ← G(u, n + 1); G(u, n + 1) ← 0;
    m ← DijkstrasAlgorithm(G, u, n + 1);
    for k = Arc + 1, Arc + 2, ..., m − 1 do
        E ← E ∪ {p v,j, p u,n+1−k}, v = 1, 2, ..., n, v ≠ u; j = 1, 2, ..., n, j ≠ n + 1 − k;
    end
    G(u, n + 1) ← Arc;
end
Case 2:
for v = 1, 2, ..., n do
    Arc ← G(n + 1, v); G(n + 1, v) ← 0;
    m ← DijkstrasAlgorithm(G, n + 1, v);
    for k = Arc + 1, Arc + 2, ..., m − 1 do
        E ← E ∪ {p v,k, p u,i}, u = 1, 2, ..., n, u ≠ v; i = 1, 2, ..., n, i ≠ k;
    end
    G(n + 1, v) ← Arc;
end
General Case:
for u = 1, 2, ..., n do
    for v = 1, 2, ..., n; v ≠ u do
        Arc ← G(u, v); G(u, v) ← 0;
        m ← DijkstrasAlgorithm(G, u, v);
        for k = Arc + 1, Arc + 2, ..., m − 1 do
            E ← E ∪ {p u,l, p v,l+k}, l = 1, 2, ..., n − k;
        end
        G(u, v) ← Arc;
    end
end
EXIT {E};

Algorithm 2:
How to Initialize Exclusion Set E
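The general case of the initialization can be sketched compactly. This is our own illustrative Python, not the paper's FORTRAN: it works on vertices 0..n−1, omits the Case 1 / Case 2 handling of the origin vertex, and uses breadth-first search for the minimal path lengths (equivalent to Dijkstra's algorithm when all arcs are equally weighted); all function names are ours:

```python
from collections import deque

def shortest_path_len(adj, s, t, removed=None):
    """Length of a minimal s-t path with equally weighted arcs (BFS is
    equivalent to Dijkstra here); returns n + 1 when no path exists,
    matching the paper's convention.  'removed' temporarily deletes one arc."""
    n = len(adj)
    dist = [n + 1] * n
    dist[s] = 0
    q = deque([s])
    while q:
        x = q.popleft()
        for y in range(n):
            if adj[x][y] and (x, y) != removed and dist[y] > dist[x] + 1:
                dist[y] = dist[x] + 1
                q.append(y)
    return dist[t]

def general_case_E(adj):
    """General case of Algorithm 2: for each ordered pair (u, v),
    temporarily delete arc (u, v); if the minimal remaining path has
    length m, no u-v path of length k exists for k = Arc+1, ..., m-1,
    and each {p[u][l], p[v][l+k]} is added to E."""
    n = len(adj)
    E = set()
    for u in range(n):
        for v in range(n):
            if u == v:
                continue
            arc = adj[u][v]
            m = shortest_path_len(adj, u, v, removed=(u, v))
            for k in range(arc + 1, m):
                for l in range(n - k):        # l ranges over 1..n-k (0-based here)
                    E.add(((u, l), (v, l + k)))
            # arc (u, v) is restored implicitly: adj was never modified
    return E

# The 4-cycle 0-1-2-3-0: arc (0, 2) is absent and the shortest 0-2 path has
# length 2, so k = 1 pairs {p[0][l], p[2][l+1]} enter E; deleting arc (0, 1)
# leaves a shortest 0-1 path of length 3, so k = 2 pairs enter E as well.
C4 = [[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]]
E = general_case_E(C4)
assert ((0, 0), (2, 1)) in E and ((0, 0), (1, 2)) in E
assert ((0, 0), (1, 1)) not in E    # arc (0, 1) exists, so k = 1 is skipped
```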
5. Empirical Results and Two Conjectures.
Table 5.1 below lists some details of 25 applications (all 3-regular graphs) of the algorithm. Table 5.2 below lists some details of 20 applications (mostly 3-regular graphs) of an earlier version of the matching based closure algorithm called the WCA ∗∗ [12] (a subset from over 2,100 applications). For both algorithms, all of the graphs are decided non-Hamiltonian and no application of either algorithm to any other graphs failed that are not reported. In both tables, heading p u,i (|V|) is the count of non-zero p u,i variables and the size of initial available set V (the number of non-zero q u,i,v,j components in Q) after initializing E, before implementing the algorithm. Note p u,i = q u,i,u,i. We only count q u,i,v,j, i < j (distinct q u,i,v,j). In Table 5.2, heading |open(V)| ≤ refers to an upper bound on |open(V)| for 11 selected graphs, each modified to include the cycle 1 − 2 − ... − n − (n + 1) − 1, simply to observe open(V). Two of these graphs are also hypohamiltonian. The count in parentheses is an upper bound on |open(V)| after removing a vertex and re-running the WCA.

Conjecture 5.1.
Polynomial sized proof of membership of all n²(n − 1)²/2 distinct {p u,i, p v,j} ∈ E exists for all simple, connected, 3-regular, non-Hamiltonian graphs.

Conjecture 5.2.
Triple overlay matching based closure deduces open(V) = ∅ for all simple, connected, 3-regular, non-Hamiltonian graphs.

∗∗ The WCA is a breadth-first closure, exhausting the middle v, j loop before returning to label CONTINUE TRIPLE CLOSURE. It's followed by triple closure, also applied breadth-first, exhausting the interior w, k loop before returning to label TRIPLE CLOSURE. Many more applications of Boolean closure across all of Q at many more intermediate steps are also implemented, unlike triple overlay matching based closure as we have presented (although these checks can also be included). Block overlays are also restricted to be of the form Q(u,i) and Q(v,j), i < j. In this way we can solve problems in the 50-100 vertex range. The WCA is designed to be parallelized, and the FORTRAN code is written for distributed computing.

Table 5.1
Applications of the Triple Overlay Matching Based Closure Algorithm
Name of Graph | n | p u,i (|V|)
Petersen Snark | 10 | 57 (858)
3 Flower Snarks | 12, 20, 28 | 87 (2,199), 271 (26,380), 567 (126,128)
Tietze's Snark | 12 | 87 (2,257)
2 Blanusa Snarks | 18, 26 | 223 (16,630), 495 (88,968)
House of Graphs [1] | 18 | 219 (16,262)
A Loupekine Snark | 22 | 345 (43,719)
A Goldberg Snark | 24 | 419 (65,711)
10 House of Graphs [1] | 26 | ≈ 500 (≈ ...)

Table 5.2
Applications of A Matching Based Closure Algorithm (WCA) [12]
Name of Graph | n (edges) | p u,i (|V|) | |open(V)| ≤
Petersen Snark | 10 (15) | 57 (858) | 792
Herschel Graph | 11 (18) | No 2-Factor | 1,980
A Kleetope | 14 (36) | 147 (8,166) | 5,809
Matteo [4] †† | 20 (30) | 275 (26,148) | 27,093
Coxeter | 28 (42) | 597 (136,599) | 135,453 (1,241)
House of Graphs Barnette-Bosk-Lederberg †† | 38 (57) | 1,077 (440,318) | 96,834
A Hypohamiltonian | 45 (70) | 1,656 (1,109,738) | 296,668 (29,724)
Tutte †† | 46 (69) | 1,649 (1,060,064) | 382,400
A Grinberg Graph | 46 (69) | 1,737 (1,204,722) | Not run yet
Georges †† | 50 (75) | 2,037 (1,701,428?) | Not run yet
Szekeres Snark | 50 (75) | 2,045 (1,718,336) | Not run yet
Watkins Snark | 50 (75) | 2,051 (1,708,987) | Not run yet
Ellingham-Horton | 54 (81) | 2,315 (2,135,948) | 1,045,041
Thomassen | 60 (99) | 3,105 (4,071,600) | Not run yet
Meredith | 70 (140) | 4,221 (7,526,996) | Not run yet
A Flower Snark | 76 (114) | 4,851 (9,720,420) | Not run yet
Horton †† | 96 (144) | 8,205 (29,057,118) | Not run yet
A Goldberg Snark | 104 (156) | 9,339 (37,802,124) | Not run yet

Table legend: Simple, connected, 3-regular, and 3-colourable. Hypohamiltonian. Confirmed existence of non-empty open(V) after removing a vertex and re-running the WCA.
†† Historical Note: Ignoring the planarity condition on Tait's conjecture, the Matteo graph [4] is the smallest non-planar counterexample, while the Barnette-Bosk-Lederberg graph [13] is the smallest planar, 3-colourable, 3-connected counterexample to Tait's conjecture (Tutte's graph is a larger counterexample). We also note that the Georges graph is the smallest counterexample to Tutte's conjecture, and Horton's graph was the first counterexample to Tutte's conjecture.

6. Discussion.

6.1. About Practical Generalizations of the Algorithm.
The algorithm can be designed to invoke arbitrary levels of overlay, i.e. adaptive strategies that change the level of overlay if more depth is desired / needed to deduce variables at zero level. But in order to make use of increased overlay, it's necessary to add more variables to retain information about tests for matching. For example, if we create a quadruple overlay version of the algorithm, we then introduce {0,1} {p u,i, p v,j, p w,k} variables and redefine system 1 and matrix Q in terms of triply nested Birkhoff polyhedra. See the discussion in [7] for a description of these polyhedra as feasible regions of LP formulations (relaxed IPs). There exists a sequence of feasible regions in correspondence with increasing levels of nested Birkhoff polyhedra whose end feasible region is the convex hull of the set of integer extrema of system 1. See [2] for a discussion of facet-inducing inequalities.

The term closure has so far been reserved for deducing variables added to E by invoking the algorithm. But other polynomial time techniques can be used to deduce variables at zero level. For example, prior to matching, we could implement LP and maximize each variable in system 1, and if its maximum is less than unit level, the variable can be set zero. In our implementation, we use Boolean closure. See [12] for more details. We also note there exist entire conferences devoted to Matching Under Preferences [11]. Perhaps many more innovative heuristics exist and can be included in the algorithm.

The algorithm is designed for parallel processing. Each q u,i,v,j variable not yet deduced zero can be tested independent of the others by making a copy of matrix Q and implementing the algorithm. If an independent process deduces a q u,i,v,j variable at zero level, simply update the corresponding q u,i,v,j variable in each Q across all processes.

For some applications, there exist model specific dependencies between variables, i.e.
undirected HCP implies {p u,i, p v,j} = 0 if and only if {p u,n+1−i, p v,n+1−j} = 0. In this way we account for companion cycles.

Exclusion set E is the focus of study. We propose to classify different E by the pattern that remains in matrix Q after exit from the algorithm (up to isomorphism), i.e. Q covers the set of all possible solutions to the IP. It would be useful to know what kinds of E cause the algorithm to generate Q as a minimal cover, since it then follows that the algorithm would decide feasibility of the IP. Even if there exist classes of E for which infeasible IPs provably exit the algorithm infeasible, no matter that Q is or is not a minimal cover, it still follows that the algorithm decides feasibility of the IP. We plan to investigate counterexamples via the matching model for HCP. Graph C7-21 (not 3-regular) fails an earlier version of the algorithm [12]. We will convert [3] C7-21 and study it as an instance of 3HCP.

We now present two more matching models as applications for the algorithm ‡‡. Q's components no longer have the interpretation as sequenced arcs in a cycle. Instead, let Q be an m x m block permutation matrix, whose blocks are m x m permutation matrices P. We note from [7] that F is a subgraph of G if and only if there exists permutation matrix P such that P^T GP covers F, (and we add) if and only if Q · g covers f, where f and g are column vectors of adjacency matrices F and G formatted as {F(1,1), F(1,2), ..., F(1,m), F(2,1), F(2,2), ..., F(m,m)} and {G(1,1), G(1,2), ..., G(1,m), G(2,1), G(2,2), ..., G(m,m)}.

We now model both the graph and subgraph isomorphism decision problems as matching models, the single difference being that in the case of graph isomorphism, more information appears to be added to E. First note that Q · g covers f means Q · g is required to place ones in the same positions as those of f.
So for each of these equations, a subset of row components sum to one, implying that the complement row components must therefore all be set at zero level. Add them to E. This completes the subgraph isomorphism matching model and only part of the graph isomorphism model. For graph isomorphism, cover means equality. The remaining equations to be satisfied are those for which Q · g is required to place zeroes in the same positions as those of f. So for each of these equations, a subset of row components sum to zero, implying that these row components must therefore all be set at zero level. Add them to E. This completes the graph isomorphism matching model.

We originally intended for the algorithm to decide feasibility of a matching model. When it decides infeasibility, the algorithm has served its purpose. Otherwise it's not known if the model is feasible or infeasible. We note that open(V) is a refined cover of possible solutions to the IP and we believe that this is useful. We propose that the algorithm can be developed as part of other search based algorithms, either to provide refined information prior to a search, or incorporated and updated alongside a search based algorithm to provide more information during a search.

There is one last thought about an academic use for the algorithm. Suppose we are given a correctly guessed infeasible IP, and the algorithm exits undecided. We can attribute the failure to E as lacking the necessary / right kind of {p_{u,i}, p_{v,j}} that could induce closure. We could then theoretically augment E with additional {p_{u,i}, p_{v,j}} until we deduce infeasibility, and discover extra information needed to generate open(V) = ∅. So for application, when the algorithm gets stuck and open(V) ≠ ∅, simply augment E with additional {p_{u,i}, p_{v,j}} ∈ open(V), and test if open(V) becomes empty. While it might be difficult to guess minimal sized sets of additional {p_{u,i}, p_{v,j}}, if they can be guessed, we will then have articulated what critical information is needed to solve the problem. Of course it's not known if these additional {p_{u,i}, p_{v,j}} can be efficiently computed or validated as members in E. See conjecture 5.1.
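The zero-level deductions for the two models above reduce to a simple rule: a pair {p_{u,i}, p_{v,j}} is forced to zero whenever an edge of F would be mapped onto a non-edge of G (subgraph model), and, for graph isomorphism, also whenever a non-edge of F would be mapped onto an edge of G. A sketch follows; exclusion_set is our illustrative name, not part of the paper's notation, and E is represented here as ordered index pairs ((u,i),(v,j)).

```python
def exclusion_set(F, G, isomorphism=False):
    # Build the instance-defining set E of pairs {p_{u,i}, p_{v,j}} at zero
    # level, following the row-sum argument in the text:
    #  - subgraph model: rows of Q.g where f = 1 force the components with
    #    g = 0 to zero level, i.e. F(u,v) = 1 and G(i,j) = 0;
    #  - graph isomorphism ("cover" means equality): rows where f = 0 force
    #    the components with g = 1 to zero level, i.e. F(u,v) = 0 and
    #    G(i,j) = 1.
    m = len(F)
    E = set()
    for u in range(m):
        for v in range(m):
            for i in range(m):
                for j in range(m):
                    if F[u][v] == 1 and G[i][j] == 0:
                        E.add(((u, i), (v, j)))
                    elif isomorphism and F[u][v] == 0 and G[i][j] == 1:
                        E.add(((u, i), (v, j)))
    return E
```

For instance, mapping a single edge into an edgeless graph on two vertices excludes every assignment of the edge's endpoints, while exclusion_set(F, F, isomorphism=True) never excludes an identity-style assignment.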
7. Acknowledgements and Dedication.
Thank you to: Adrian Lee for preparing and running some of the examples presented in Tables 5.1 and 5.2; Nicholas Swart for testing and implementing 2000 31-vertex non-Hamiltonian graphs in 2013; Catherine Bell for suggestions and contributions early on in this project.

We dedicate this paper to the late Pal Fischer (16/11/2016). For Ted, Pal was a colleague, co-author and friend. For myself (Gismondi), Pal taught me analysis, an understanding of convex polyhedra, and later became a colleague. Ted and I both already miss him very much.
REFERENCES

[1] G. Brinkmann, K. Coolsaet, J. Goedgebeur, and H. Melot, House of Graphs: a database of interesting graphs, Discrete Applied Mathematics, 161 (2013), pp. 311–314. Available at http://hog.grinvin.org.
[2] M. Demers and S.J. Gismondi, Enumerating facets of Q^{0/1}, Util. Math., 77 (2008), pp. 125–134.
[3] V. Ejov, M. Haythorpe, and S. Rossomakhine, A linear-size conversion of HCP to 3HCP, Australasian Journal of Combinatorics, 62 (2015), pp. 45–58.
[4] Mathematics Stack Exchange, graph6 string: Sspp?wc g ?w?o???i???ag?bo?g. Retrieved May 7, 2016, http://math.stackexchange.com/questions/367671/smallest-nonhamiltonian-3-connected-graph-with-chromatic-index-3, (2016).
[5] J.A. Filar, M. Haythorpe, and S. Rossomakhine, A new heuristic for detecting non-hamiltonicity in cubic graphs, Computers & Operations Research, 64 (2015), pp. 283–292.
[6] M.R. Garey, D.S. Johnson, and R.E. Tarjan, The planar Hamiltonian circuit problem is NP-complete, SIAM J. Comput., 5 (1976), pp. 704–714.
[7] S.J. Gismondi, Subgraph isomorphism and the Hamilton tour decision problem using a linearized form of P^tGP, Util. Math., 76 (2008), pp. 229–248.
[8] S.J. Gismondi, Modelling decision problems via Birkhoff polyhedra, Journal of Algorithms and Computation, 44 (2013), pp. 61–81.
[9] S.J. Gismondi and E.R. Swart, A model of the coNP-complete non-Hamilton tour decision problem, Math. Prog. Ser. A, 100 (2004), pp. 471–483.
[10] M. Haythorpe, FHCP challenge set. Retrieved July 16, 2016, http://fhcp.edu.au/fhcpcs, (2015).
[11] Microsoft Research Lab New England, MATCH-UP 2017, Cambridge, MA, USA.
[12] E.R. Swart, S.J. Gismondi, N.R. Swart, C.E. Bell, and A. Lee, Deciding graph non-hamiltonicity via a closure algorithm, Journal of Algorithms and Computation, 48 (2016), pp. 1–35.
[13]