Solving Packing Problems with Few Small Items Using Rainbow Matchings
Max Bannach, Sebastian Berndt, Marten Maack, Matthias Mnich, Alexandra Lassota, Malin Rau, Malte Skambath
Max Bannach
Institute for Theoretical Computer Science, Universität zu Lübeck, Lübeck, [email protected]
Sebastian Berndt
Institute for IT Security, Universität zu Lübeck, Lübeck, [email protected]
Marten Maack
Department of Computer Science, Universität Kiel, Kiel, [email protected]
Matthias Mnich
Institut für Algorithmen und Komplexität, TU Hamburg, Hamburg, [email protected]
Alexandra Lassota
Department of Computer Science, Universität Kiel, Kiel, [email protected]
Malin Rau
Univ. Grenoble Alpes, CNRS, Inria, Grenoble INP, LIG, 38000 Grenoble, [email protected]
Malte Skambath
Department of Computer Science, Universität Kiel, Kiel, [email protected]
Abstract
An important area of combinatorial optimization is the study of packing and covering problems, such as Bin Packing, Multiple Knapsack, and Bin Covering. Those problems have been studied extensively from the viewpoint of approximation algorithms, but their parameterized complexity has only barely been investigated. For problem instances containing no "small" items, classical matching algorithms yield optimal solutions in polynomial time. In this paper we approach these problems via their distance from triviality, measuring the problem complexity by the number k of small items. Our main results are fixed-parameter algorithms for vector versions of Bin Packing, Multiple Knapsack, and Bin Covering parameterized by k. The algorithms are randomized with one-sided error and run in time 4^k · k! · n^{O(1)}. To achieve this, we introduce a colored matching problem to which we reduce all these packing problems. The colored matching problem is natural in itself and we expect it to be useful for other applications. We also present a deterministic fixed-parameter algorithm for Bin Packing with run time O((k!)^2 · k · 2^k · n log(n)).
Theory of computation → Fixed parameter tractability
Keywords and phrases
Bin Packing, Knapsack, matching, fixed-parameter tractable
Funding
Matthias Mnich : Supported by DFG grants MN 59/1-1 and MN 59/4-1.
Alexandra Lassota: Supported by DFG grant "Strukturaussagen und deren Anwendung in Scheduling- und Packungsprobleme", JA 612/20-1.
Acknowledgements
We want to thank Magnus Wahlström for helpful discussions.
An important area of combinatorial optimization is the study of packing and covering problems. Central among those is the Bin Packing problem, which has sparked numerous important algorithmic techniques. In Bin Packing, the goal is to pack a set of n items with sizes in (0, 1] into as few unit-sized bins as possible. Referring to its simplicity and vexing intractability, this problem was labeled "the problem that wouldn't go away" more than three decades ago [11], and it is still the focus of groundbreaking research today. Regarding approximability, the best known result is an additive O(log OPT)-approximation, due to Hoberg and Rothvoß [13, 17].

A recent trend is to apply tools from parameterized complexity theory to problems from operations research [30]. For Bin Packing, a natural parameter is the minimum number of bins. For this parameter, Jansen et al. [19] showed that this problem is W[1]-hard, even for instances encoded in unary. Another natural parameter is the number d of distinct item sizes. For d = 2, a polynomial-time algorithm was discovered by McCormick et al. [28, 29] in the 1990s. The complexity for all d ≥ 3 remained open for a long time, until Goemans and Rothvoß showed that Bin Packing can be solved in time (log ∆)^{2^{O(d)}}, where ∆ is the largest number in the input. A similar result was shown later by Jansen and Klein [18]. Neither the algorithm by Goemans and Rothvoß nor the algorithm by Jansen and Klein is a fixed-parameter algorithm for the parameter d, which would require the algorithm to run in time f(d) · n^{O(1)} for some computable function f.

In light of these daunting results, we propose another natural parameter for Bin Packing. This parameter is motivated by the classical approach of parameters measuring the distance from triviality, a concept first proposed by Niedermeier [32, Sect. 5.4]. Roughly speaking, this approach measures the distance of the given instance from an instance which is solvable in polynomial time. This approach has already been used for many different problems such as Clique, Set Cover, Power Dominating Set, or Longest Common Subsequence [15]. Even treewidth, one of the arguably most important graph parameters, is often interpreted as the distance of a given graph from a tree [15]. Interestingly, the number of special cases where Bin Packing can be solved in polynomial time is rather small, and the corresponding algorithms often rely on reductions to matching problems. In this work, we propose as a novel parameter the distance from instances without small items. If no small item (with size at most 1/3) exists, Bin Packing becomes polynomial-time solvable via a reduction to the matching problem, as each bin can contain at most two items. If the number of small items is unbounded, the problem becomes NP-hard.

Two problems related to Bin Packing are
Bin Covering, where the number of covered bins (containing items of total size at least 1) should be maximized, and Multiple Knapsack, a generalization of the Knapsack problem. These problems have been studied extensively (see the books by Gonzalez [14] and Kellerer et al. [22]). They share with Bin Packing the trait that the efficiency of exact algorithms is hindered by the existence of small objects.

In all mentioned problems, the items have a one-dimensional size requirement. As this is too restrictive in many applications, so-called vector versions were proposed [1, 10]. In these versions, called Vector Packing, Vector Covering, and Vector Multiple Knapsack, each object has a d-dimensional size requirement, and a set of objects can be packed only if the size constraints are fulfilled in each dimension j = 1, ..., d. These problems are much harder than their 1-dimensional versions; e.g., Vector Packing does not admit an asymptotic polynomial-time approximation scheme even for d = 2 [38]. For d-dimensional problems, we use the word vectors instead of items and containers instead of bins.

(Footnote: the algorithm by Jansen and Klein is, though, a fixed-parameter algorithm for the parameter |V_I|, the number of vertices of the integer hull of the underlying knapsack polytope.)

What it means to be small.
In the one-dimensional version of Vector Packing, the definition of a small item is quite natural: every item with size less than or equal to 1/3 is small. In the d-dimensional case, smallness is less obvious. The requirement for large items is that only two of them can be placed inside the same container. We call a subset of vectors V' ⊆ V 3-incompatible if no selection of three distinct vectors from V' may be placed in the same container, i.e., for each u, v, w ∈ V' there exists an ℓ ∈ {1, ..., d} such that u_ℓ + v_ℓ + w_ℓ > T_ℓ, where T_ℓ is the capacity constraint of the container in dimension ℓ. Let V_L ⊆ V be a largest 3-incompatible set. We call the vectors v ∈ V_L large and the vectors from the set V_S = V \ V_L small. Moreover, we define the number of small vectors in V as the cardinality of the complement of a largest 3-incompatible set in V. Note that each 3-incompatible set V' contains at most two vectors in which all entries have size at most 1/3. Hence, for Bin Packing the largest 3-incompatible set corresponds to the set of large items plus at most two additional items.

An important property of our definition is that the smallness of a vector is no longer an attribute of the vector itself, but needs to be treated with regard to all other vectors. Finding a set V_S ⊆ V of small vectors of minimum cardinality might be non-trivial. We argue that this task is fixed-parameter tractable parameterized by |V_S|. To find V_S, we compute a largest 3-incompatible set in V. The complement V_S = V \ V_L of a largest 3-incompatible set can be found in time f(|V_S|) · n^{O(1)} by a reduction to 3-Hitting Set. In this problem, a collection of sets S_1, ..., S_n ⊆ T with |S_i| = 3 is given, and a set H ⊆ T with H ∩ S_i ≠ ∅ for all i ∈ {1, ..., n} is sought. In Section 3, we present a reduction from the problem of finding the sets V_L and V_S to an instance of the 3-Hitting Set problem, which we can solve using:

Fact 1 ([9, 33, 37]). 3-Hitting Set can be solved in time 2.…^k · n^{O(1)}, where k is the size of the solution. A corresponding solution can be obtained within the same time.

Our results.
We settle the parameterized complexity of the vector versions of
Bin Packing , Bin Covering , and
Multiple Knapsack parameterized by the number k of small objects. Our main results are randomized fixed-parameter algorithms, which solve all those problems in time 4^k · k! · n^{O(1)} with one-sided error, where n is the total number of objects. Note that Vector Multiple Knapsack is already NP-hard for d = 1 and p_max ≤ n^{O(1)} [12, 26], where p_max denotes the largest profit of any object.

Theorem 2.
Vector Packing and
Vector Covering can be solved by a randomized algorithm (with bounded false negative rate in n) in time 4^k · k! · n^{O(1)}. Vector Multiple Knapsack can be solved by a randomized algorithm (with bounded false negative rate in n + p_max) in time 4^k · k! · n^{O(1)} · (p_max)^{O(1)}, where p_max is the largest profit of any vector.

Our approach is to reduce the vector versions of packing and covering problems to a new matching problem on edge-colored graphs, which we call
Perfect Over-the-Rainbow Matching.

In the
Perfect Over-the-Rainbow Matching problem, we are given a graph G. Each edge e ∈ E(G) is assigned a set of colors λ(e) ⊆ C, and for each such color c there is a non-negative weight γ(e, c). The objective is to find a perfect matching M of G and a function ξ : M → C such that (i) ξ(e) ∈ λ(e) for all e ∈ M (we can only choose from the assigned colors), (ii) ⋃_{e ∈ M} {ξ(e)} = C (every color is present in the matching), and (iii) Σ_{e ∈ M} γ(e, ξ(e)) is minimized (the sum of the weights is minimized). The parameter for the problem is |C|, the number of different colors.

We show how to solve Perfect Over-the-Rainbow Matching by an approach that is based on the
Conjoining Matching problem. The
Conjoining Matching problem was proposed by Sorge et al. [36], who asked whether it is fixed-parameter tractable. The question was resolved independently by Gutin et al. [16], and by Marx and Pilipczuk [27], who both gave randomized fixed-parameter algorithms. Based on both results, we derive:
Theorem 3.
There is a randomized algorithm (with bounded false negative rate in n + ℓ) that solves Perfect Over-the-Rainbow Matching in time 2^{|C|} · n^{O(1)} · ℓ^{O(1)}.
Vector Packing , VectorCovering , and
Vector Multiple Knapsack .Whether there is a deterministic fixed-parameter algorithm for
Conjoining Matching remains a challenging question, as also pointed out by Marx and Pilipczuk [27]. For someof the problems that can be solved by the randomized algebraic techniques of Mulmeley,Vazirani and Vazirani [31], no deterministic polynomial-time algorithms have been found,despite significant efforts. The question is whether the use of such matching algorithms isessential for
Conjoining Matching, or can be avoided by a different approach.

We succeed in circumventing the randomness of our algorithm in the 1-dimensional case of
Bin Packing . Namely, we develop another, deterministic algorithm for
Bin Packing, for which we prove strong structural properties of an optimal solution; those structural insights may be of independent interest.
Theorem 4.
Bin Packing can be solved deterministically in time O((k!)^2 · k · 2^k · n log(n)).

Related Work.
The class of small items, their relation to matching problems, and special instances without small items have been extensively studied in the literature. Shor [34, 35] studies the relation between online bin packing, where the items are chosen uniformly at random from (0, 1], and matching problems; a further line of work considers strategies for Vector Packing that are optimal for instances containing at most two small items. Kenyon [23] studies the expected performance ratio of the Best-Fit algorithm for Bin Packing on a worst-case instance where the items arrive in random order. To prove an upper bound on the performance ratio, she classifies the items into small items (of size at most 1/3) and larger size classes [23]. Kuipers [24] studies so-called bin packing games, where the goal is to share a certain profit in a fair way between the players controlling the bins and the players controlling the items. He only studies instances without small items and shows that every such instance has a non-empty ε-core (a way of spreading the profits relatively fairly) for ε ≥ 1/7. Babel et al. [2] present an algorithm with competitive ratio 1 + 1/√5 for instances without small items. A variant of Bin Packing forbids packing more than k different items into a single bin. The special case k = 2 corresponds to instances without small items and can be solved in time n^{O(1/ε)} for bins of size 1 + ε [8]. Finally, Bansal et al. [3] study approximation algorithms for Vector Packing. To obtain their algorithms, they present a structural lemma stating that any solution with m bins can be turned into a solution of a special structure with at most (d + 1) · m/2 bins.

Structure of the document.
In Section 2, we briefly introduce randomized and parameterized algorithms, and we define the problems studied in this work together with their corresponding parameters. We use Section 3 to show the parameterized reductions from the packing and covering problems to the Perfect Over-the-Rainbow Matching problem. Section 4 contains the parameter-preserving transformation of the Perfect Over-the-Rainbow Matching problem to Conjoining Matching and the resulting parameterized algorithms. As the algorithm for Perfect Over-the-Rainbow Matching is randomized, so are the algorithms for the covering and packing problems. In Section 5, we give a deterministic parameterized algorithm for the classical 1-dimensional version of Bin Packing. Lastly, we summarize our results and state some open questions for further research.
We give a short introduction to parameterized and randomized algorithms, and refer to the standard textbooks for details [6, 32]. Afterwards, we introduce the packing problems formally. Finally, we define our auxiliary matching problem.
Parameterized Algorithms. A parameterized problem is a language L ⊆ {0, 1}* × N, where the second element is called the parameter. Such a problem is fixed-parameter tractable if there is an algorithm that decides whether (x, k) is in L in time f(k) · |x|^c for a computable function f and a constant c. A parameterized reduction from a parameterized problem L to another one L' is an algorithm that transforms an instance (x, k) into an instance (x', k') such that (i) (x, k) ∈ L ⇔ (x', k') ∈ L', (ii) k' ≤ f(k), and (iii) the transformation runs in time f(k) · |x|^c.

Randomized Algorithms. A randomized algorithm is an algorithm that explores some of its computational paths only with a certain probability. A randomized algorithm A for a decision problem L has one-sided error if it correctly detects either all positive or all negative instances with probability 1. It has a bounded false negative rate if Pr[A(x) = "no" | x ∈ L] ≤ 1/|x|^c, that is, it declares a "yes"-instance a "no"-instance with probability at most 1/|x|^c. All randomized algorithms in this article have a bounded false negative rate.

Packing and Covering Problems.
In the
Vector Packing problem, we aim to pack a set V = {v_1, ..., v_n} ⊆ Q^d_{≥0} of vectors into the smallest possible number of containers, where all containers have a common capacity constraint T ∈ Q^d_{≥0}. Let v_j ∈ V be a vector. We use v_j^ℓ to denote the ℓ-th component of v_j and T_ℓ to denote the ℓ-th constraint. A packing is a mapping σ : V → N_{>0} from vectors to containers. It is feasible if all containers i ∈ N_{>0} meet the capacity constraint, that is, for each ℓ ∈ {1, ..., d} it holds that Σ_{v_j ∈ σ^{-1}(i)} v_j^ℓ ≤ T_ℓ. Using as few containers as possible means to minimize max{σ(v_j) | v_j ∈ V}.

In the introduction we already discussed what it means to be "small". We expect only few small items, so we consider this quantity as the parameter for Vector Packing:

Vector Packing
Parameter:
Number k of small vectors Input:
A set V = {v_1, ..., v_n} ⊆ Q^d_{≥0} of n vectors and a capacity constraint T ∈ Q^d_{≥0}.
Find a packing of V into the smallest number of containers. The 1-dimensional case of the problem is the
Bin Packing problem. There, vectors are called items, their single component is called the size, and the containers are called bins. In contrast to the multi-dimensional case, we are now given a sequence of items, denoted by I.

Bin Packing
Parameter:
Number k of small items Input:
A sequence I = (i_1, ..., i_n) of n items such that i_j ∈ Q_{≥0} for each i_j ∈ I, and a capacity constraint T ∈ Q_{≥0}.
Find a packing of I into the smallest number of bins. Another related problem is
Vector Covering, where we aim to cover the containers. We say a packing σ : V → N_{>0} covers a container i if Σ_{v_j ∈ σ^{-1}(i)} v_j^ℓ ≥ T_ℓ for each component ℓ ∈ {1, ..., d}. The objective is to find a packing σ that maximizes the number of covered containers, that is, we want to maximize |{i ∈ N | Σ_{v_j ∈ σ^{-1}(i)} v_j^ℓ ≥ T_ℓ for all ℓ ∈ {1, ..., d}}|.

Vector Covering
Parameter:
Number k of small vectors Input:
A set V = {v_1, ..., v_n} of n vectors of dimension d such that v_j ∈ Q^d_{≥0} for each v_j ∈ V, as well as capacity constraints T ∈ Q^d_{≥0}.
Find a packing of V that covers the largest number of containers. The last problem we study is the
Vector Multiple Knapsack problem: here, a packing into a finite number C of containers is sought. Therefore, not all vectors may fit into them. We have to choose which vectors we pack, considering that each vector v_j ∈ V has an associated profit p(v_j) ∈ N_{≥0}. A packing of the vectors is a mapping σ : V → {1, ..., C} ∪ {⊥} such that Σ_{v_j ∈ σ^{-1}(i)} v_j^ℓ ≤ T_ℓ holds for all i ∈ {1, ..., C} and ℓ ∈ {1, ..., d}, which means no container is over-packed. The objective is to find a packing with a maximum total profit of the packed vectors, that is, we want to maximize Σ_{i=1}^C Σ_{v ∈ σ^{-1}(i)} p(v).

Vector Multiple Knapsack
Parameter:
Number k of small vectors Input:
A set V = {v_1, ..., v_n} with v_j ∈ Q^d_{≥0}, a profit function p : V → N_{≥0}, as well as capacity constraints T ∈ Q^d_{≥0} and a number C of containers.
Find a packing of V into the containers which maximizes the total profit.

(Footnote: this is due to the fact that, in the multi-dimensional setting, we can simply model multiple occurrences of the same vector by introducing an additional dimension encoding the index of the vector; this is not possible in the one-dimensional case.)
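The feasibility and covering conditions used in the three problem definitions above can be stated compactly in code. The following is a minimal, illustrative Python sketch; the function names and the representation of a packing as a list of containers are our own assumptions, not notation from the paper:

```python
def is_feasible_packing(packing, T):
    """A packing, given as a list of containers (each a list of d-dimensional
    vectors), is feasible if every container obeys the capacity constraint T
    in every dimension l, i.e. the packed vectors sum to at most T_l."""
    return all(sum(v[l] for v in container) <= T[l]
               for container in packing
               for l in range(len(T)))

def covered_containers(packing, T):
    """Number of containers covered in the Vector Covering sense: the
    vectors packed into the container reach at least T in every dimension."""
    return sum(1 for container in packing
               if all(sum(v[l] for v in container) >= T[l]
                      for l in range(len(T))))
```

For example, with T = (1.0, 1.0) a container holding (0.5, 0.2) and (0.4, 0.7) is feasible, since both coordinate sums stay at 0.9.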
Conjoining and Over-the-Rainbow Matchings.
We introduce two useful problems to tackle the questions mentioned above, namely
Perfect Over-the-Rainbow Matching and
Conjoining Matching. The following section presents the reductions from
Vector Packing , Vector Covering and
Vector Multiple Knapsack to Conjoining Matching using
Perfect Over-the-Rainbow Matching as an intermediate step. By results of Gutin etal. [16], and Marx and Pilipczuk [27], we can solve
Conjoining Matching efficiently and,thus, our packing and covering problems as well. A matching in a graph G describes a set ofedges M ⊆ E ( G ) without common nodes, that is, e ∩ e = ∅ for all distinct e , e ∈ M . Amatching is perfect if it covers all nodes. In the Perfect Over-the-Rainbow Matching problem, we are given an graph G as well as a color function λ : E ( G ) → C \ {∅} whichassigns a non-empty set of colors to each edge, and an integer ‘ . For each edge e and eachcolor c ∈ λ ( e ), there is a non-negative weight γ ( e, c ). The objective is to find a perfectmatching M and a surjective function ξ : M → C with ξ ( e ) ∈ λ ( e ) for each e ∈ M such that P e ∈ M γ ( e, ξ ( e )) ≤ ‘ . The surjectivity guarantees that each color must appear at least once.We call such a pair ( M, ξ ) a perfect over-the-rainbow matching and the term P e ∈ M γ ( e, ξ ( e ))denotes its weight . This name comes from the closely related rainbow matching problem,where each color appears exactly once [21, 25]. In contrast to our problem, a sought rainbowmatching covers as many colors as possible, but not necessarily all, and the maximum sizeof a rainbow matching is bounded by the number of colors. In our variant we must coverall colors, and likely have to cover some colors more than once to get a perfect matching.Formally, the problem is defined as follows: Perfect Over-the-Rainbow Matching
Parameter:
The number of colors |C|
Input:
A graph G, a set of colors C = {1, ..., |C|}, a function λ : E(G) → 2^C \ {∅}, edge weights γ : {(e, c) | e ∈ E(G), c ∈ λ(e)} → Q_{≥0}, and a number ℓ.
Find a perfect over-the-rainbow matching (M, ξ) in G of weight at most ℓ.

We sometimes omit the surjective function ξ if it is clear from the context. Related to this problem is Conjoining Matching: we have a partition V_1 ⊎ ··· ⊎ V_t of the nodes of G and a pattern graph H with V(H) = {V_1, ..., V_t}. Instead of covering all colors in a perfect matching, this problem asks to find a conjoining matching M ⊆ E(G), which is a perfect matching such that for each {V_i, V_j} ∈ E(H) there is an edge in M with one node in V_i and the other in V_j. Roughly speaking, each edge in H corresponds to some edges in G, of which at least one has to be taken by M. Formally, the problem is given by:

Conjoining Matching
Parameter:
The number of edges of H Input:
A weighted graph G = (V, E, γ) with γ : E → Q_{≥0}, a node partition V_1 ⊎ ··· ⊎ V_t, a number ℓ, and a graph H with V(H) = {V_1, ..., V_t}.
Find a perfect matching M in G of weight at most ℓ such that for each edge {V_i, V_j} ∈ E(H) there is an edge {u, v} ∈ M with u ∈ V_i and v ∈ V_j.

Gutin et al. [16, Theorem 7] and Marx and Pilipczuk [27] gave randomized fixed-parameter
Conjoining Matching on loop-free graphs H. We show how a simple reduction also solves the problem on graphs with loops.

Lemma 5.
The
Conjoining Matching problem can be solved by a randomized algorithm (with bounded false negative rate in n + ℓ) in time 2^{|E(H)|} · n^{O(1)} · ℓ^{O(1)}, even if H contains self-loops.

Sketch of Proof. If H does not contain self-loops, the claim is proven by Gutin et al. [16]. The case that H does contain self-loops can be reduced to the loop-free version by a simple layering argument: first direct the edges of H arbitrarily (for instance by using the lexicographical order of the nodes), and then define G' and H' via

V(H') = {h^1, h^2 | h ∈ V(H)} ∪ {h*},
E(H') = {{h_i^1, h_j^2} | (h_i, h_j) ∈ E(H)},
V(G') = {v^1, v^2, v* | v ∈ V(G)},
E(G') = {{v^1, v*}, {v^2, v*} | v ∈ V(G)} ∪ {{v^1, w^2} | (v, w) ∈ E(G)}.

Observe that H' is loop-free and |E(H')| = |E(H)|. Further note that, in any perfect matching in G', for each v ∈ V(G) either v^1 or v^2 must be matched with v*; the other node, together with its matching partner, corresponds to an edge of a corresponding perfect matching in G, as it is only connected to v* and to the copies {w^1, w^2 | {v, w} ∈ E(G)} of the neighbors of v. Finally, to preserve weights, set γ'({v^1, v*}) = γ'({v^2, v*}) = 0 and γ'({v^1, w^2}) = γ({v, w}) for all v, w ∈ V(G).
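The layering construction from the proof sketch above can be written out programmatically. Below is a hedged Python sketch of this reduction; the data representation (tagged tuples for the node copies and a reserved `'*'` part) is our own choice, not prescribed by the paper:

```python
def remove_loops(G_nodes, G_edges, H_nodes, H_edges, part, gamma):
    """Layering reduction from the proof sketch of Lemma 5: given G, a node
    partition `part`, and a pattern graph H that may contain self-loops,
    build a loop-free instance (G', H'). Node copies are tagged (v, 1) and
    (v, 2); each v additionally gets a private node (v, '*')."""
    Gp_nodes = [(v, t) for v in G_nodes for t in (1, 2, '*')]
    # in any perfect matching of G', one copy of v is matched to (v, '*');
    # these forced edges carry weight 0, so weights are preserved
    Gp_edges = {((v, 1), (v, '*')): 0 for v in G_nodes}
    Gp_edges.update({((v, 2), (v, '*')): 0 for v in G_nodes})
    # an (arbitrarily) directed original edge {v, w} becomes {(v,1), (w,2)}
    for (v, w) in G_edges:
        Gp_edges[((v, 1), (w, 2))] = gamma[(v, w)]
    # H': two layers per part; a self-loop {h, h} turns into {(h,1), (h,2)}
    Hp_nodes = [(h, t) for h in H_nodes for t in (1, 2)]
    Hp_edges = [((hi, 1), (hj, 2)) for (hi, hj) in H_edges]
    # partition of G': copy (v, t) lies in the layer-t copy of v's part,
    # and all starred nodes form one extra part
    partp = {(v, t): (part[v], t) for v in G_nodes for t in (1, 2)}
    partp.update({(v, '*'): '*' for v in G_nodes})
    return Gp_nodes, Gp_edges, Hp_nodes, Hp_edges, partp
```

Note that |E(H')| = |E(H)|, so the parameter is preserved, and every edge of H' connects two distinct nodes even when H had a self-loop.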
Vector Packing and
VectorCovering can be solved in time 4 k · k ! · n O (1) and the Multiple Knapsack problem in time4 k · k ! · n O (1) · ( p max ) O (1) respectively. The first phase to solve these packing and coveringproblems is to interpret them as Perfect Over-the-Rainbow Matching problems. Eachproblem admits a similar procedure: Guess the packing of the small vectors; guess thenumber of large vectors for each container; use these guesses to pack the large vectors byformulating the problem as a matching problem in a graph. The idea is that the nodes ofthis graph represent the large vectors. An edge represents that both endpoints can be placedinto the same container to satisfy the condition of the problem, i. e., either to fit into thecontainer or to cover it. Introducing a color and a weight function for the edges, we manageto handle the containers already filled with some small vectors and the overall profits of thepacking. Note that the guessing also serves as a transformation from the minimization andmaximization problems to decision problems as each guess also corresponds to some fixednumber of containers and if applicable to the profit. So we ask if there is a solution withthese numbers and thus we can solve this question via a reduction.
Identifying the Set of Small Vectors
Before we can proceed as outlined above, we first need to identify the sets V_L and V_S of large and small vectors explicitly. This can be done via a reduction to the 3-Hitting Set problem as follows: the set of elements is given by the set of vectors V, and we compute all sets S ⊆ V of triplets that fit together in a single container, i.e., |S| = 3 and Σ_{v ∈ S} v ≤ T. Consider a hitting set H for this instance. Then the set V \ H is 3-incompatible. To see this, consider any three distinct vectors u, v, w ∈ V \ H. If we had u + v + w ≤ T, then the set {u, v, w} would be part of the computed collection of triplets. Yet {u, v, w} ∩ H = ∅, a contradiction. We can take the given number k of small vectors and use Fact 1 to obtain a hitting set H ⊆ V of size at most k. We set V_L = V \ H and V_S = H. As there are O(n^3) triplets, this yields a run time of 2.…^k · n^{O(1)} (see Fact 1).

The Case of Packing Vectors
Recall that in the
Vector Packing problem we are given n vectors of dimension d and a set of containers, each with the same size limitation T ∈ Q^d. Furthermore, we assume that the sets V_S and V_L are given explicitly by using the computation explained above. Any solution needs at most |V| and at least ⌈|V_L|/2⌉ containers. Furthermore, if there is a solution with m ≤ |V| containers, there is also a solution with m' containers for any m' ∈ {m + 1, ..., |V|}. Thus a binary search for the optimal number of containers between the given bounds is possible. Let C be the current guess of the number of containers. Now we have to decide whether there exists a solution using exactly C containers. We guess the packing of the small vectors, that is, we try all possible partitions into at most min{C, k} subsets. It is not hard to see that the number of such partitions is upper bounded by the k-th Bell number: the first vector is packed by itself, the second can either be packed with the first one or also by itself, and so on. If any of the corresponding containers is already over-packed, we discard the guess. In the following, we call the used containers partially filled, as some area is already occupied by small vectors. For these partially filled containers, we guess which of them are finalized, i.e., which of them do not contain an additional large vector in the optimal solution, and discard them for the following steps. There are at most 2^k such guesses. We denote the number of discarded containers by C_0. For each of the remaining partially filled containers, we introduce a new color. Furthermore, we introduce a color ⊤ representing the empty containers, if existent. Hence, the resulting set of colors C has cardinality at most k + 1. For each c ∈ C, we denote by s(c) ∈ Q^d the residual size in the corresponding container. We place the large vectors V_L inside the C − C_0 residual containers by reducing the task to a Perfect Over-the-Rainbow Matching problem.
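The enumeration of small-vector packings described above is a standard set-partition generator: each new element either joins an existing group or opens a new one, which directly mirrors the Bell-number argument in the text. A sketch with illustrative names:

```python
def partitions(items, max_parts):
    """All partitions of `items` into at most `max_parts` non-empty groups.
    Built incrementally: each item joins an existing group or opens a new
    one, which bounds the total count by the Bell number B_k."""
    if not items:
        yield []
        return
    first, rest = items[0], items[1:]
    for part in partitions(rest, max_parts):
        for i in range(len(part)):          # join an existing group
            yield part[:i] + [part[i] + [first]] + part[i + 1:]
        if len(part) < max_parts:           # or open a new group
            yield part + [[first]]
```

For three small vectors and at least three containers this yields B_3 = 5 partitions; capping `max_parts` at 2 removes the all-singletons partition.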
Note that if the current guesses are correct, each of the C − C_0 containers receives at least one and at most two large vectors. Hence, we may assume |V_L|/2 ≤ C − C_0 ≤ |V_L| (and reject the current guess otherwise). Furthermore, the number of containers receiving one or two large vectors, respectively, is already determined by C and C_0. We denote these numbers by C_1 and C_2 and remark that C_2 := |V_L| − (C − C_0) ≥ 0 and C_1 := (C − C_0) − C_2 = 2(C − C_0) − |V_L| ≥ 0. We construct a graph G = (V, E) to find a feasible packing. Every large vector v ∈ V_L is represented by two nodes v and v' in V. Let V'_L = {v' | v ∈ V_L}. Next, we define a set B of 2 · C_2 new nodes called blocker nodes, which ensures that all vectors are placed inside exactly C − C_0 containers. We define V := V_L ∪ V'_L ∪ B. In this graph, an edge between nodes in V_L ∪ V'_L represents a possible packing of the large vectors inside one container. Hence, we add an edge e = {v, w} between two original vectors v, w ∈ V_L and assign this edge some color c ∈ C if these vectors fit together inside the corresponding container. Furthermore, we add an edge between a vector v ∈ V_L and its copy v' ∈ V'_L and assign it the color c ∈ C if the vector alone fits inside the corresponding container. More formally, we introduce the set of edges E_c := {{u, v} | u, v ∈ V_L, u + v ≤ s(c)} ∪ {{v, v'} | v ∈ V_L, v ≤ s(c)} for each color c ∈ C. Additionally, we introduce the edges of a complete bipartite graph between the copied nodes V'_L on the one hand and the blocker nodes B on the other hand. More formally, we define E_⊥ := {{v', b} | v' ∈ V'_L, b ∈ B}. Together, we get E := E_⊥ ∪ ⋃_{c ∈ C} E_c. Finally, we define the color function λ such that each edge in E_c gets color c, for each c ∈ C_⊥ := C ∪ {⊥} (where E_⊥ is treated as the edge set of color ⊥). More formally, we define λ(e) := {c ∈ C ∪ {⊥} | e ∈ E_c}. See Figure 1 for an example of the construction. Note that the weights on the edges are irrelevant in this case and can be set to one, i.e., γ(e, c) = 1.
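The edge sets E_c and E_⊥ can be assembled directly from the residual sizes s(c). The following Python sketch does this for small instances; the node tags `'orig'`, `'copy'`, `'blocker'` and the reserved color `'bot'` standing for ⊥ are our own encoding, not the paper's:

```python
from itertools import combinations

def build_rainbow_graph(large, residual, n_blockers):
    """large: list of d-dimensional vectors; residual: dict color -> s(c);
    n_blockers: the 2*C2 blocker nodes. Returns the node list and `lam`,
    mapping each edge to its admissible color set (blocker edges get the
    reserved color 'bot', representing the bottom color for E_bot)."""
    d = len(next(iter(residual.values())))
    fits = lambda u, s: all(u[l] <= s[l] for l in range(d))
    orig = [('orig', i) for i in range(len(large))]
    copy = [('copy', i) for i in range(len(large))]
    blockers = [('blocker', b) for b in range(n_blockers)]
    lam = {}
    for (i, j) in combinations(range(len(large)), 2):
        cols = {c for c, s in residual.items()
                if all(large[i][l] + large[j][l] <= s[l] for l in range(d))}
        if cols:                       # two large vectors share a container
            lam[(('orig', i), ('orig', j))] = cols
    for i in range(len(large)):        # a large vector alone in a container
        cols = {c for c, s in residual.items() if fits(large[i], s)}
        if cols:
            lam[(('orig', i), ('copy', i))] = cols
    for cp in copy:                    # complete bipartite copies-vs-blockers
        for b in blockers:
            lam[(cp, b)] = {'bot'}
    return orig + copy + blockers, lam
```

With unit weights on all (edge, color) pairs this matches the construction in the text, where only the colors, not the weights, carry information.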
To finalize the reduction, we have to define the size ℓ of the matching we are looking for. We aim to find a perfect matching and hence are searching for a matching of size ℓ := |V_L| + C_2. Note that if C_2 = 0, and therefore no blocker nodes are introduced, we also remove the color ⊥ from the set of colors.

Lemma 6.
There is a packing of the large vectors V_L inside C − C_0 containers such that each container holds at least one large vector if and only if the above described instance of Perfect Over-the-Rainbow Matching is a "yes"-instance.

Figure 1: Construction of the graph G for a Bin Packing instance with sets V_S = {0.…, 0.…, 0.…} and V_L = {0.…, 0.…, 0.…}. The guessed number of bins is C = 3. All small items are packed separately, and the bin containing 0.15 is finalized (C_0 = 1). There is thus a bin containing 0.… . The edges of color ⊥ between all nodes of V'_L and all nodes of B are omitted.

Proof.
Assume there is a packing of the vectors V_L inside C − C_0 containers such that each container holds at least one large vector. In this case, we can construct a perfect over-the-rainbow matching M as follows. For each pair of vectors v, w ∈ V_L that is assigned to the same container, we choose the corresponding edge {v, w} for the matching and assign it the corresponding color c ∈ C. For each vector v ∈ V_L that is the only large vector in its container, we choose the edge {v, v'} for the matching and assign it the corresponding color c ∈ C. Up to this point, all vectors in V_L are covered by exactly one matching edge, since each of them is contained in exactly one container. Note that in the given packing there have to be exactly C_1 = 2(C − C_0) − |V_L| containers with exactly one large vector and C_2 = |V_L| − (C − C_0) containers with exactly two large vectors. As a consequence, there are exactly 2 · C_2 nodes in V'_L that are not yet covered by a matching edge, since their originals are covered by edges between each other. For each of these nodes, we choose an individual node from the set B, define the edge between these nodes as a matching edge, and assign it the color ⊥. Since there are exactly 2 · C_2 blocker nodes, we cover all nodes in V with matching edges, and hence we have constructed a perfect matching. Each color c ∈ C \ {⊤} is represented by one partially filled container, and hence each has to appear in the matching. Moreover, if the color ⊤ was introduced, that is, there were fewer than C − C_0 containers partially covered by small vectors, then there was a container exclusively containing large vectors, and hence ⊤ is used in the matching as well. Therefore, we have indeed constructed a perfect over-the-rainbow matching.

Conversely, assume that we are given a perfect over-the-rainbow matching M. Consequently, each vector in V_L is covered by exactly one matching edge.
As M contains atmost |V L | + C edges, and 2 · C edges are needed to cover the nodes in B , there are exactly |V L | − C = ( C − C ) matching edges containing the nodes from V L . As in M each color ispresent, we can represent each container by such a matching edge and place the correspondingvector or vectors inside corresponding containers. If a color c ∈ C \ {>} appears more thanonce, we use an empty container for the corresponding large vectors. (cid:74) To decide if there is a packing into at most C containers, we find a partition of the k small vectors with O ( k !) guesses, and the to-be-discarded containers with O (2 k ) guesses.Constructing the graph G needs O ( n k ) operations. By Theorem 3, a perfect over-the-rainbowmatching over k + O (1) colors with weight ‘ ∈ O ( n ) can be computed in time 2 k · n O (1) . Tofind the correct C we call the above algorithm in binary search fashion O (log( n )) times, aswe need at most n containers. This results in a run time of 2 k · k ! · n O (1) = 4 k · k ! · n O (1) . . Bannach et al. 11 The Case of Covering Vectors
Recall that in the Vector Covering problem, we are given n vectors V of dimension d and a set of containers, each with the same size requirement T ∈ Q^d. Further, we are given a partition of the vectors V into the set V_L of large vectors and the set V_S of small ones. The large vectors have the property that any three of them together cover a container. On the other hand, we need at least two vectors to cover one container (otherwise, we can remove the corresponding vectors from the instance and only consider the residual instance). Hence, an optimal solution covers at least ⌊|V_L|/3⌋ containers, while it can cover at most ⌊|V|/2⌋ containers. Remark that for this problem we can search for the optimal number of covered containers in binary search fashion between the bounds ⌊|V_L|/3⌋ and ⌊|V|/2⌋: a solution covering a given number of containers can be transformed into a solution covering one container less. In the following, we assume that we are given the number C of containers to be covered and have to decide whether covering them is possible. Furthermore, note that each solution containing multiple partially covered containers can be transformed into a solution where each container is completely covered, by distributing the vectors from the non-covered containers to the covered containers. Clearly, this might empty some containers completely.

Similar as for Vector Packing, we first guess the distribution of the small vectors V_S to the (at most C) containers. Each distribution of these vectors affects at most k containers, as |V_S| = k. Since the order of the containers is irrelevant, there are at most O(k!) distinct possibilities to distribute these small vectors.

In the next step, we remove all containers that are completely covered by now. This leaves C′ ≤ C containers that we still have to cover. Let C_P ≤ C′ be the number of those containers that contain a small vector from V_S. We call these containers partially covered.
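The distribution-guessing step above can be sketched as follows. This is an illustrative enumeration under our own naming, not code from the paper: since containers receiving small vectors are interchangeable, guessing a distribution amounts to enumerating the partitions of the k small vectors into unordered, non-empty groups, of which there are Bell(k) ≤ k! many.

```python
def set_partitions(items):
    """Enumerate all partitions of `items` into unordered, non-empty groups.
    Each group corresponds to one container receiving these small vectors."""
    if not items:
        yield []
        return
    first, rest = items[0], items[1:]
    for partition in set_partitions(rest):
        # put `first` into one of the existing groups ...
        for i in range(len(partition)):
            yield partition[:i] + [partition[i] + [first]] + partition[i + 1:]
        # ... or open a new group (container) for it
        yield partition + [[first]]

small_vectors = ["s1", "s2", "s3", "s4"]   # k = 4 hypothetical small vectors
guesses = list(set_partitions(small_vectors))
print(len(guesses))                        # Bell(4) = 15 <= 4! = 24
```

The algorithm would iterate over these guesses and, for each of them, build and solve one matching instance.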
As in the algorithm for the Vector Packing problem, we introduce a set of colors C such that each partially covered container is represented by one color c ∈ C and the empty containers are represented by one color called ⊤. On the one hand, as we have to distribute the vectors in V_L to the C′ containers, there are at least |V_L| − 2·C′ containers with more than two large vectors. On the other hand, there are C_P partially covered containers, and they might need only one vector to be covered, while all others need at least two vectors. Hence, the number of containers admitting more than two vectors is bounded by |V_L| − 2·(C′ − C_P) − C_P = |V_L| − 2·C′ + C_P. Note that a container with more than three large vectors stays covered if one of its large vectors is removed. Hence, we can guarantee that, as long as containers with at most two vectors exist, there are no containers with more than three large vectors.

In the next step, we guess the number C₁ of containers with only one large vector in the optimal solution. As a consequence, there are exactly |V_L| − C₁ large vectors that have to be placed inside containers with more than one large vector, and C′ − C₁ such containers. Consequently, there are exactly C₃ := (|V_L| − C₁) − 2·(C′ − C₁) containers with three large vectors and hence C₂ := C′ − C₁ − C₃ containers with exactly two large vectors. Note that for this guess there are at most O(k) options, since only partially covered containers can be covered by only one large vector.

Knowing these values, we construct a graph G = (V, E) for a Perfect Over-the-Rainbow Matching problem as follows. Similar as above, the set of nodes is a combination of nodes generated for the large vectors and some blocker nodes. Again, each vector v ∈ V_L has a node in the graph, as well as a copy v′ of itself in V_L′. Furthermore, we introduce two sets of blocker nodes B′ and B such that |B′| = |V_L| − C₁ and |B| = |V_L| − (C₁ + 2·C₂). For each pair of nodes v, w ∈ V_L, we introduce one edge e = {v, w} and assign the color c to it if these two vectors together cover the corresponding container that includes the small vectors associated with color c. Similarly, we introduce an edge e = {v, v′} between a vector v ∈ V_L and its copy v′ ∈ V_L′ and assign it the color c ∈ C if this vector alone covers the corresponding container. More precisely, writing s(c) for the demand left open in the container with color c, we introduce for each c ∈ C the edge set E_c := {{u, w} | u, w ∈ V_L, s(u) + s(w) ≥ s(c)} ∪ {{v, v′} | v ∈ V_L, s(v) ≥ s(c)}. Additionally, we introduce all edges between the copied nodes V_L′ and the blocker nodes B′ to ensure that exactly C₁ vectors are placed alone inside their containers. Furthermore, we introduce all edges between the blocker nodes B and the vector nodes V_L to ensure that exactly C₁ + 2·C₂ vectors are placed inside the containers. More formally, we define E_⊥ := {{v′, b} | v′ ∈ V_L′, b ∈ B′} ∪ {{v, b} | v ∈ V_L, b ∈ B}, where ⊥ ∉ C. Together, we get E := E_⊥ ∪ ⋃_{c ∈ C} E_c.

Finally, we have to define the color function λ, the weight function γ, and the maximal weight ℓ of the matching. We define λ: e ↦ {c | c ∈ C ∪ {⊥}, e ∈ E_c} (where E_⊥ belongs to the color ⊥) and

γ(e, c) := 1 if c ∈ C \ {⊥, ⊤}, and γ(e, c) := 0 otherwise,

for all e ∈ E and c ∈ C ∪ {⊥} with c ∈ λ(e). We want each color of a partially covered container to be taken at most once and, hence, search for a matching with weight at most ℓ := |C \ {⊥, ⊤}|.

▶ Lemma 7.
There is a covering of the C′ containers using the large vectors V_L such that there are exactly C₁ containers with one large vector and C₂ containers with two large vectors if and only if the above described graph G has a perfect over-the-rainbow matching M of weight ℓ = |C \ {⊥, ⊤}|. Proof.
Assume there is a covering of the C′ containers by the vectors V_L such that there are exactly C₁ containers with one large vector and C₂ containers with two large vectors. In this case, we can construct a perfect over-the-rainbow matching M as follows. For each pair of vectors v, w ∈ V_L that is assigned to the same container, we choose the corresponding edge {v, w} for the matching and assign it the corresponding color. For each vector v ∈ V_L that is the only large vector in its container, we choose the edge {v, v′} for the matching and assign it the corresponding color. To this point, exactly C₁ nodes from the set V_L′ are covered by the matching and exactly C₁ + 2·C₂ nodes from the set V_L are covered by matching edges. As a consequence, there are exactly |V_L| − C₁ nodes in V_L′ that still need to be covered and exactly |V_L| − (C₁ + 2·C₂) nodes in V_L that need to be covered. For each of the |V_L| − C₁ nodes in V_L′, we choose one individual node from the |V_L| − C₁ nodes in B′ arbitrarily, add the corresponding edge to the matching M, and assign it the color ⊥. For the |V_L| − (C₁ + 2·C₂) nodes in V_L, we choose one individual node from the |V_L| − (C₁ + 2·C₂) nodes in B arbitrarily, add the corresponding edge to the matching M, and assign it the color ⊥.

Obviously, M covers all the nodes in the graph and is a matching; hence M is a perfect matching. Furthermore, each color in C \ {⊥, ⊤} is chosen exactly once, while the color ⊤ is chosen at least once. Since the edges with colors in C \ {⊥, ⊤} have a weight of 1 while all other edges have a weight of 0, M has a weight of exactly |C \ {⊥, ⊤}| = ℓ.

On the other hand, assume that we are given a perfect over-the-rainbow matching M for G with weight exactly ℓ = |C \ {⊥, ⊤}|. Since all colors have to appear at least once, and each edge with a color in C \ {⊥, ⊤} has a weight of exactly 1, each of these colors can appear at most once. Since M is a perfect matching, each vector in V_L is covered by exactly one matching edge and M contains exactly |V|/2 = 2·|V_L| − (C₁ + C₂) edges. Exactly |B′| = |V_L| − C₁ nodes in V_L′ are contained in edges between V_L′ and B′, since all the neighbors of nodes in B′ can be found in V_L′. Hence, there are exactly |V_L′| − |B′| = C₁ edges with colors c ∈ C in M that are between the nodes in V_L and their corresponding copies in V_L′. We place the corresponding vectors alone in the containers corresponding to the colors of these edges. The remaining |V_L| − C₁ nodes in V_L are covered by edges between each other or by edges to the set B. Since |B| = |V_L| − (C₁ + 2·C₂) and these nodes only have neighbors in the set V_L, there are exactly |V_L| − (C₁ + 2·C₂) nodes in V_L that share a matching edge with a node in B. Hence, the remaining (|V_L| − C₁) − (|V_L| − (C₁ + 2·C₂)) = 2·C₂ nodes have to be paired by matching edges. We place these vectors pairwise inside the corresponding containers. The residual vectors are distributed in groups of three and placed inside the residual C₃ = C′ − C₁ − C₂ containers. Since each of the colors of the partially covered containers appears exactly once in the matching, all containers are covered by this assignment. ◀

In the following, we summarize the run time of the above described algorithm to decide whether C containers can be covered. Finding the correct partition of the k small vectors can be done with O(k!) guesses. Finding the number C₁ of containers with only one large vector can be done with O(k) guesses. Finally, the construction of the graph G needs at most O(n²·k) operations. Hence, the run time can be summarized as 2^(k+1) · k · k! · n^O(1) ≤ 4^k · k! · n^O(1). Lastly, to find the correct C, we call the above algorithm in binary search fashion at most O(log(n)) times, since we can cover at most n containers.

The Case of Packing Vectors with Profits
Recall that in the Vector Multiple Knapsack problem, we are given a set V of n vectors of dimension d, a profit function p: V → ℕ, and C containers, each with capacity constraint T ∈ Q^d. Furthermore, we are given a partition of the vectors V into small vectors V_S and large vectors V_L.

Again, we guess the distribution of the small vectors. However, since it might not be optimal to place all small vectors, we first have to guess which subset of them is chosen in the optimal solution. There are at most 2^k · k! possibilities for both guesses combined. After this step, we have at most k containers that are partially filled with small vectors.

In the next step, we guess for each partially filled container whether it contains an additional large vector, and discard the containers that do not. There are at most 2^k possible choices for this. Let C₀ be the number of such discarded containers. This step leaves C − C₀ containers for the large vectors in V_L. Again, we define a color for each remaining partially filled container and one color ⊤ for the empty containers, resulting in a set C of at most k + 1 colors.

Similar as for the problems Vector Packing and Vector Covering, we construct a graph G = (V, E) to find a profit-maximal packing. We introduce one node for each vector v ∈ V_L and a node for its copy v′ ∈ V_L′. Furthermore, we introduce a set B of 2·|V_L| − 2·(C − C₀) blocker nodes to ensure that we use exactly C − C₀ containers. We define a profit of zero for the copy nodes and the blocker nodes, while the nodes for the original vectors v ∈ V_L have profit p(v).

We add an edge between two nodes v, w ∈ V_L and assign it the color c ∈ C if the two vectors together fit inside the container assigned the color c. Furthermore, we add an edge between a node v ∈ V_L and its copy v′ ∈ V_L′ and assign it the color c ∈ C if the vector fits alone inside the corresponding container. More formally, writing s(c) for the free capacity of the container with color c, we define for each color c ∈ C the set E_c := {{u, w} | u, w ∈ V_L, s(u) + s(w) ≤ s(c)} ∪ {{v, v′} | v ∈ V_L, s(v) ≤ s(c)}. Finally, we connect each node from the set B with each node from the set V_L ∪ V_L′, i.e., we define E_⊥ := {{v, b} | v ∈ V_L ∪ V_L′, b ∈ B}. In total, we set E := E_⊥ ∪ ⋃_{c ∈ C} E_c.

Finally, we define the color function λ and the weight function γ. For this purpose, we denote p_max := max{p(v) | v ∈ V_L} and define λ: e ↦ {c | c ∈ C ∪ {⊥}, e ∈ E_c} as well as

γ({v, w}, c) := 2·p_max − (p(v) + p(w)) if v, w ∈ V_L ∪ V_L′, and γ({v, w}, c) := 0 otherwise,

for all e = {v, w} ∈ E and c ∈ C ∪ {⊥} with c ∈ λ(e). Obviously, all weights are non-negative.

▶ Lemma 8.
There is a packing of the large vectors inside the corresponding C − C₀ containers with profit at least p if and only if there is a perfect over-the-rainbow matching M in G with weight at most 2(C − C₀)·p_max − p. Proof.
Assume we are given a packing of large vectors inside the C − C₀ containers with profit at least p. For each container, we choose the edge between the corresponding pair of vectors (or between the vector and its copy in case the container holds only one large vector) for the matching and assign it the corresponding color. Now there are exactly 2·(C − C₀) nodes in V_L ∪ V_L′ covered by the matching. The remaining |V_L ∪ V_L′| − 2·(C − C₀) nodes in V_L ∪ V_L′ are each paired with an arbitrary node in B. Since B contains exactly 2·|V_L| − 2·(C − C₀) nodes, each node can be paired.

The obtained matching M is a perfect matching, since each node is covered. Furthermore, each color in C \ {⊤} is used exactly once by definition of the colors. Let V_{L,S} ⊆ V_L be the set of large vectors packed in the given solution. By definition of the solution, it holds that p(V_{L,S}) ≥ p. Note that the weight of an edge between two nodes v, w ∈ V_L is given by 2·p_max − (p(v) + p(w)), while an edge between a node v ∈ V_L and its copy v′ has weight 2·p_max − p(v). All edges to the blocker nodes B have weight 0. Hence, the weight of the matching is given by 2(C − C₀)·p_max − p(V_{L,S}) ≤ 2(C − C₀)·p_max − p, which proves the first implication.

To prove the other direction, assume that we are given a perfect over-the-rainbow matching with weight at most 2(C − C₀)·p_max − p. Each of the 2·|V_L| − 2·(C − C₀) blocker nodes in B is matched to exactly one node in V_L ∪ V_L′. As a result, there are exactly 2·(C − C₀) nodes in V_L ∪ V_L′ that are paired among themselves by the matching. We place the corresponding vectors inside the corresponding containers with regard to the colors of the matching edges. If a color c ∈ C \ {⊤} appears more than once, we use an empty container.

This packing is valid, since each color appears at least once and hence we can fill each container. Let V_{L,S} ⊆ V_L be the set of large vectors that are matched with a node from the set V_L ∪ V_L′. Then, by definition of the weight function, the matching has a weight of 2(C − C₀)·p_max − Σ_{v ∈ V_{L,S}} p(v) ≤ 2(C − C₀)·p_max − p. As a consequence, the profit of the packing is given by Σ_{v ∈ V_{L,S}} p(v) ≥ p. ◀

We can summarize the steps of the algorithm as follows. For each choice of small items and each possibility to distribute these items, the algorithm considers each choice of partially filled containers that do not contain an additional large item. For each of these choices, the algorithm constructs the graph described above. Then it performs a binary search for a perfect over-the-rainbow matching with the smallest possible weight within the bounds [0, 2(C − C₀)·p_max]. Finally, it returns the packing with the largest total profit found among all possibilities.

By Theorem 3, we need at most 2^|C| · n^O(1) · (p_max)^O(1) ≤ 2^(k+2) · n^O(1) · (p_max)^O(1) operations to solve the constructed Perfect Over-the-Rainbow Matching problem. Finding the correct choice and partition of the k small vectors can be done with O(2^k · k!) guesses. Finding the containers without a large vector can be done with O(2^k) guesses. Finally, the construction of the graph G needs at most O(n²·k) operations. The binary search over the profits can be done in at most O(log((C − C₀)·p_max)) = O(log(n) + log(p_max)) operations, since the number of containers is bounded by n^O(1). Hence, the run time is bounded by 4^k · k! · n^O(1) · (p_max)^O(1).

In the previous section, we have reduced several packing and covering problems to Perfect Over-the-Rainbow Matching problems. Of course, all this effort would be in vain without the means to find such matchings. This section presents a reduction to the task of finding a conjoining matching, which results in a parameterized algorithm for finding perfect over-the-rainbow matchings by applying Lemma 5. Overall, this proves Theorem 3, which is repeated below for convenience:
Claim of Theorem 3.
There is a randomized algorithm (with one-sided error in the form of false negatives) that solves Perfect Over-The-Rainbow Matching in time 2^|C| · n^O(1) · ℓ^O(1).

We aim to construct graphs H and G′ such that G has a perfect over-the-rainbow matching if, and only if, G′ has a perfect conjoining matching with respect to H of the same weight. Recall that in an over-the-rainbow matching, we request an edge of every color to be part of the matching, while in a conjoining matching, we request edges between certain sets of nodes to be part of the matching. For the reduction, we transform G into G₁, …, G_|C|, where each G_c is a copy of G containing only edges of color c. Hence, V(G_c) = {v_c | v ∈ V(G)}, i.e., v_c is the copy of v ∈ V(G) in V(G_c), and G_c contains only the edges e ∈ E(G) with c ∈ λ(e). We set G′ to be the disjoint union of the G_c, while setting V(H) = {V(G_c) | c ∈ C} and E(H) = {{h, h} | h ∈ V(H)}. Now a conjoining matching contains an edge of every color; however, the same edge of G could be used in multiple ways in the different copies G_c. To address this issue, we introduce a gadget that enforces that any perfect matching in G′ uses at most one copy of every edge of G. In detail, for every node v ∈ V(G) we add an independent set J(v) of size |C| − 1. Furthermore, we fully connect J(v) to all copies of v in G′, that is, we add the edges {v_c, x} for all c ∈ C and x ∈ J(v) to G′. This construction is illustrated in Figure 2. Observe that in any perfect matching of G′, all elements of J(v) must be matched and, thus, we "knock out" |J(v)| = |C| − 1 copies of v in G′, leaving exactly one copy to be matched in one G_c. We add one more node to H that represents the union of all the sets J(v) and has no connecting edge.
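The graph part of this reduction can be sketched as follows. The encoding of nodes and edges below is our own illustrative choice, not the paper's notation: we build one copy G_c per color, keeping only the edges of that color, and attach the knock-out gadget J(v) of |C| − 1 blocker nodes per original node.

```python
def build_conjoining_instance(nodes, colored_edges, colors):
    """Build the graph G' of the reduction: one copy G_c of G per color c
    (keeping only the edges of color c), plus an independent set J(v) of
    |colors| - 1 blocker nodes per original node v, fully connected to
    all copies of v."""
    V, E = [], []
    for c in colors:                       # the copies G_c
        V += [(v, c) for v in nodes]
        E += [((u, c), (w, c)) for (u, w, ec) in colored_edges if ec == c]
    for v in nodes:                        # the gadget J(v)
        J = [("J", v, i) for i in range(len(colors) - 1)]
        V += J
        E += [((v, c), x) for c in colors for x in J]
    return V, E

# Tiny example: one edge {a, b} carrying two colors.
nodes = ["a", "b"]
edges = [("a", "b", "red"), ("a", "b", "blue")]
V, E = build_conjoining_instance(nodes, edges, ["red", "blue"])
print(len(V), len(E))   # 2*2 copy nodes + 2*1 blockers = 6 nodes; 2 + 4 = 6 edges
```

In a perfect matching of this G′, the single blocker in J(v) absorbs one of the two copies of v, so at most one copy of the edge {a, b} can be used.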
To complete the description of the reduction, let us describe the weight function γ′ of G′: for each e ∈ E(G′), we define γ′(e) := γ(e, c) if e ∈ E(G_c) for some c ∈ C, and γ′(e) := 0 for each e with e ∩ J(v) ≠ ∅ for some v ∈ V(G).

▶ Lemma 9.
Let (G, λ, γ) be a colored and edge-weighted graph, and let G′ and H be defined as above. There is a perfect over-the-rainbow matching M of weight ℓ in G if, and only if, there is a perfect conjoining matching M′ of weight ℓ in G′. Proof.
First, let us consider a perfect over-the-rainbow matching M in G. Let ξ: M → C be a surjective function with ξ(e) ∈ λ(e) for all e ∈ M. We have to show that there is a perfect conjoining matching in G′. Let M′ = {{v_ξ(e), w_ξ(e)} | e = {v, w} ∈ M}. Since M is a matching in G, M′ is a matching in G′; and since ξ is surjective, M′ contains at least one edge in every copy G_c of G in G′ and, thus, M′ is actually a conjoining matching.

Figure 2: Reducing the problem of finding a perfect over-the-rainbow matching to the problem of finding a perfect conjoining matching. Left: a single node of the colored input graph with a thick edge from a perfect matching. Right: |C| = 4 copies of v in G′; the corresponding subgraphs G_c only contain edges of a single color. At the bottom, the added set J(v), which is fully connected to all copies of v. The thick edges indicate how these nodes are paired in a perfect matching.

Furthermore, there is a bijection f(e) = e_ξ(e) between M and M′ such that γ(e, ξ(e)) = γ′(f(e)), which implies that the total weight of both matchings is the same. By the definition of M′, for every node v ∈ V(G) there is exactly one color c ∈ C for which there is an edge in M′ containing v_c. Therefore, for every v ∈ V(G), the set {v₁, …, v_|C|} contains exactly |C| − 1 nodes that are unmatched by M′. We conclude that M′ can be extended to a perfect conjoining matching M″ by pairing these nodes with the nodes of J(v). Observe that M″ has the same weight as M′, as the added edges have weight zero. Therefore, M″ has the same weight as M.

For the other direction, let us consider a perfect conjoining matching M′ in G′. Observe that for all nodes v ∈ V(G), the nodes in J(v) have to be matched by M′ and, thus, for all nodes v ∈ V(G) there is exactly one node α(v) ∈ {v₁, …, v_|C|} that is not matched with an element of J(v). We define the set M = {{v, w} | v, w ∈ V(G) and {α(v), α(w)} ∈ M′} and claim that M is a perfect over-the-rainbow matching of the same weight as M′. First, observe that all v ∈ V(G) are matched by M, since M′ is a perfect matching and, thus, matches α(v) with some node, say w_i. Observe that by the definition of α we have w_i ∉ J(v), and by the construction of G′ we have that w_i is a copy of some w ∈ V(G) (it can, in particular, not be part of any other J(u)). Since w_i is paired with α(v), we conclude α(w) = w_i and, thus, {α(v), α(w)} ∈ M′. Further, notice that every v ∈ V(G) can be matched by at most one element of M, as M′ is a perfect matching and, thus, matches α(v) with exactly one other node. We conclude that M is a perfect matching of G. Finally, for all {v, w} ∈ M, observe that {α(v), α(w)} must lie in some copy G_c of G in G′. We define ξ({v, w}) = c and have γ({v, w}, c) = γ′({α(v), α(w)}). Observe that ξ is surjective since M′ is conjoining and, thus, witnesses that M is a perfect over-the-rainbow matching of G.

To conclude the proof, notice that M has the same weight as M′, as for any edge of M′ that has non-zero weight (that is, any edge that is not connected to some J(v)), we have added exactly one edge of the same weight to M. ◀

Proof of Theorem 3.
Let (G, λ, γ, ℓ) be an instance of Perfect Over-the-Rainbow Matching. We construct in polynomial time an instance (G′, H, γ′, ℓ) of Conjoining Matching, where the partition of V(G′) is defined as V(G₁) ∪̇ V(G₂) ∪̇ … ∪̇ V(G_|C|) ∪̇ J with J = ⋃_{v ∈ V(G)} J(v), and E(H) contains one self-loop for each V(G_c), c ∈ C. By Lemma 9, (G′, H, γ′, ℓ) has a perfect conjoining matching of weight ℓ if, and only if, (G, λ, γ, ℓ) has a perfect over-the-rainbow matching of weight ℓ. We apply Lemma 5 to find such a conjoining matching in time 2^|E(H)| · n^O(1) · ℓ^O(1). Observe that |E(H)| = |C| and, thus, we can find the sought perfect over-the-rainbow matching in time 2^|C| · n^O(1) · ℓ^O(1). ◀
Proof of Claim 10. The light gray rectangles denoted by s and s represent the load ofsmall items on each of the two bins b and b . The dark gray areas denoted with L and l representlarge items in the bin b We now present a fully-deterministic algorithm for
Bin Packing . The price we have topay for circumventing the randomness is an increased run time as we avoid the polynomialidentity testing subroutine. On the bright side, this makes the algorithm straightforwardand a lot simpler. We anticipate that extending this algorithm for
Vector Packing seemsquite challenging. The main obstacle here is to identify the maximum item size in some sets,a task for which there does not seem to be a sensible equivalent notion for vectors.
About the Structure of Optimal Solutions.
In the following, we prove the existence of an optimal solution that admits some useful properties regarding the placement of large items relative to small ones. These properties are utilized in the algorithm later on.

▷ Claim 10. There exists an optimal solution in which the total size of the small items on each bin containing only small items is larger than the total size of the small items on each bin that additionally contains large items.
Proof.
Suppose we are given an optimal solution in which the stated property is violated. Then there exist two bins b₁ and b₂ such that the total size s₁ of the small items on b₁, which contains only small items, is smaller than the total size s₂ of the small items on b₂, which also contains large items (s₁ ≤ s₂). We can now swap the sets of small items of b₁ and b₂. Since s₁ ≤ s₂, the load of b₂ becomes smaller when it now contains the small items with load s₁. On the other hand, the total load on b₁ is now s₂. Since this entire set was placed in one bin before, b₁ is not over-packed. We can repeat this step iteratively until the property is satisfied for all bins. The proof is illustrated in Figure 3. ◁

▷ Claim 11.
Consider an optimal solution and an arbitrary order of the bins containing small items and exactly one large item. We can repack these large items correctly using a largest-fitting approach with respect to the order of the bins; in detail, we greedily place the largest fitting item into the current bin.
Proof.
Consider the bins containing small items and exactly one large item in the given order. If the current bin b contains the largest fitting item with respect to all items packed in later bins of the order, we proceed to the next bin. Otherwise, we swap the item i_b inside this bin with the largest item i_max that fits inside this bin and was not placed inside a bin considered before. Note that the size of i_b is at most the size of i_max, since i_max is the largest item that fits inside b. As a consequence, no bin is over-packed after this swap, since the total size of the items inside the other bin decreases or stays the same. ◁

▷ Claim 12.
Consider an optimal solution in which each partially filled bin contains exactly two large items. Let i_s be the smallest large item and i_ℓ the largest one, and let them fit together inside a partially filled bin. Then there exists an optimal solution in which i_s is positioned inside a partially filled bin together with the largest large item that additionally fits. Proof.
Consider the optimal solution and the position of the smallest large item i_s. If i_s is positioned inside a partially filled bin, we can swap the additional large item with the largest item that fits into this bin together with i_s. Since this swap replaces an item of some other bin with a smaller item, the total size of the items inside that bin decreases, and hence no bin is over-packed. If i_s is not positioned inside a partially filled bin, then it fits together with the large item it is currently paired with into a partially filled bin, since we assumed that i_s fits into a partially filled bin even together with the largest item i_ℓ. We swap this pair with the two large items of one (arbitrary) fitting partially filled bin. After this swap, no bin is over-packed, since the other two large items fit into a partially filled bin and hence fit inside an empty bin as well. Finally, we swap the item that is currently paired with i_s with the largest item that fits inside this bin together with i_s. As seen above, after this swap no bin is over-packed. ◁

▷ Claim 13.
Consider an instance I in which the largest large item i_ℓ does not fit together with the smallest large item i_s inside any partially filled bin, and in which there is an optimal solution where all partially filled bins contain exactly two large items. Then there is an optimal solution that places i_ℓ together with the largest fitting large item inside one bin, or that places i_ℓ alone inside a bin if no large item fits together with i_ℓ inside one bin. Proof.
Consider an optimal solution for the given instance I in which each partially filled bin contains exactly two large items and i_ℓ and i_s do not fit together inside a partially filled bin. Consider the bin b containing the item i_ℓ. Obviously, i_ℓ is not contained inside a partially filled bin, since it does not even fit together with the smallest large item inside such a bin and hence cannot fit together with any other large item inside a partially filled bin. Now consider the largest item i that fits together with i_ℓ inside one bin, and let b′ be the bin containing this item. We can swap the item i⁺ (if existent) that is currently placed together with i_ℓ with the item i. Since the item i⁺ has at most the size of the item i, the bin b′ is not over-packed by this step. On the other hand, since i_ℓ and i fit together inside a bin and there is no small item inside b, this bin is not over-packed either. If there is no large item that fits together with i_ℓ inside one bin, then, since b does not contain any small items, i_ℓ is contained alone inside its bin. ◁

The Complete Algorithm.
In the first step of the algorithm, we sort the items by size in time O(n log(n)). Next, we guess the distribution of the small items. Since there are at most k small items, there are at most O(k!) possible guesses. We call the bins containing small items partially filled bins; there are at most k of these bins. Then, we guess a bin b that does not contain any additional large item. All partially filled bins containing small items of larger total size than b do not contain any large item either, see Claim 10; thus, we can discard them from the following considerations. There are at most k possibilities for the guess of b. Now, we guess which of the remaining partially filled bins contain exactly one large item. There are at most O(2^k) possibilities. We consider all partially filled bins for which we guessed that they contain exactly one large item, in an arbitrary order, and pair each of them with the largest fitting item. By Claim 11, we know that an optimal packing with this structure exists. Afterwards, we discard these bins from the following considerations.

It remains to pack the residual large items. Each residual partially filled bin contains exactly two large items in the optimal solution; otherwise, the guess was wrong. To place the correct large items, we proceed as follows: iterate through the large items in non-ascending order of size. Let i_ℓ be the currently considered item and let i_s be the smallest large item from the set of large items that still need to be placed. Depending on the relation between i_ℓ and i_s, we place at least one of these two items inside a bin. In the first case, i_ℓ does not fit together with i_s inside a partially filled bin. Then, we place i_ℓ together with the largest fitting item i from the set of large items that are not already placed inside one empty bin, or we place it alone inside an empty bin if such an item does not exist.
The item i can be found, or its non-existence proved, in time O(log(n)). In the second case, i_ℓ fits together with i_s inside one partially filled bin. Then, we guess which partially filled bin contains i_s and place i_s inside this bin together with the largest unplaced item that fits inside this bin. The largest fitting item can be found in time O(log(n)), and there are at most O(k!) possible guesses in total.

In the following, we argue that in both cases there exists an optimal solution in which the items are placed exactly as the algorithm places them, assuming all guesses are correct. If all previous steps were correct, we can consider the residual set of items as a new instance for which there exists an optimal solution where all partially filled bins contain exactly two large items (and we already know the correct distribution of the small items). For this new instance, in Case 1 we fill one bin correctly due to Claim 13. Since this bin is filled correctly with respect to an existing optimal solution, we can again consider the residual set of items as an independent instance that needs solving. In Case 2, on the other hand, we know by Claim 12 that there exists an optimal solution for this reduced instance in which i_s is placed together with the largest fitting large item inside one partially filled bin. If we guess this bin correctly, we have filled one bin correctly with regard to the considered instance. Hence, when reducing the considered instance to the residual set of items (without the just filled bin), there exists an optimal solution for this instance with exactly one bin less.

After placing all large items, we compare the obtained solution with the best solution found so far, save it if it uses the smallest number of bins so far, and backtrack to the last decision. Since it iterates over all possible guesses, this algorithm generates an optimal packing, and its run time is bounded by O((k!)² · 2^k · k · n log(n)).
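The largest-fitting-item queries used in both cases can be realized by binary search over the sorted sequence of unplaced item sizes. The following is a minimal sketch with a hypothetical helper name; note that Python's list `pop` makes the removal linear, whereas a balanced search tree would match the O(log n) bound used in the analysis:

```python
from bisect import bisect_right

def pop_largest_fitting(sizes, capacity):
    """sizes: ascending list of unplaced item sizes.  Remove and return
    the largest item of size <= capacity, or None if no item fits."""
    i = bisect_right(sizes, capacity)    # first index with sizes[i] > capacity
    if i == 0:
        return None                      # even the smallest item is too large
    return sizes.pop(i - 1)              # largest item that still fits

sizes = [0.3, 0.4, 0.55, 0.7]            # hypothetical residual large-item sizes
print(pop_largest_fitting(sizes, 0.6))   # -> 0.55; sizes is now [0.3, 0.4, 0.7]
```

Called with the free capacity of the current bin, this helper directly implements the greedy step of Claim 11 as well as the item searches in Cases 1 and 2.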
We provided a randomized algorithm with one-sided error to identify perfect over-the-rainbow matchings. Via reductions to this problem, we obtained randomized 4^k · k! · n^{O(1)}-time algorithms for the vector versions of Bin Packing, Multiple Knapsack, and Bin Covering parameterized by the number k of small items. We believe that studying this parameter is a natural step towards the investigation of the stronger parameterization by the number of distinct item types. In that setting, the number of small items can be large; however, there are only few small-item types, and thus we may hope to adapt some of the techniques developed in this article to that setting.

As a workhorse we used a randomized algorithm to find conjoining matchings. As mentioned by Marx and Pilipczuk, it seems challenging to find such matchings with a deterministic fixed-parameter algorithm. Alternatively, one could search directly for deterministic algorithms for the problems considered in this article. We present such an algorithm for Bin Packing; however, the techniques used in its design do not seem to generalize to the vector version.
References

N. Alon, Y. Azar, J. Csirik, L. Epstein, S. V. Sevastianov, A. P. A. Vestjens, and G. J. Woeginger. On-line and off-line approximation algorithms for vector covering problems. Algorithmica, 21(1):104–118, 1998.
L. Babel, B. Chen, H. Kellerer, and V. Kotov. Algorithms for on-line bin-packing problems with cardinality constraints. Discrete Appl. Math., 143(1-3):238–251, 2004.
N. Bansal, M. Eliás, and A. Khan. Improved approximation for vector bin packing. In Proc. SODA 2016, pages 1561–1579, 2016.
H. I. Christensen, A. Khan, S. Pokutta, and P. Tetali. Approximation and online algorithms for multidimensional bin packing: A survey. Comput. Sci. Rev., 24:63–79, 2017.
J. Csirik, J. B. G. Frenk, M. Labbé, and S. Zhang. On the multidimensional vector bin packing. Acta Cybern., 9(4):361–369, 1990.
M. Cygan, F. V. Fomin, Ł. Kowalik, D. Lokshtanov, D. Marx, M. Pilipczuk, M. Pilipczuk, and S. Saurabh. Parameterized Algorithms. Springer, 2015.
L. Epstein, L. M. Favrholdt, and A. Levin. Online variable-sized bin packing with conflicts. Discrete Opt., 8(2):333–343, 2011.
L. Epstein, A. Levin, and R. van Stee. Approximation schemes for packing splittable items with cardinality constraints. Algorithmica, 62(1-2):102–129, 2012.
S. Fafianie and S. Kratsch. A shortcut to (sun)flowers: Kernels in logarithmic space or linear time. In Proc. MFCS 2015, volume 9235 of Lecture Notes Comput. Sci., 2015.
M. R. Garey, R. L. Graham, D. S. Johnson, and A. C. Yao. Resource constrained scheduling as generalized bin packing. J. Combinatorial Theory, Ser. A, 21(3):257–298, 1976.
M. R. Garey and D. S. Johnson. Approximation algorithms for bin packing problems: A survey. In Analysis and Design of Algorithms in Combinatorial Optimization. Springer, 1981.
S. Geng and L. Zhang. The complexity of the 0/1 multi-knapsack problem. J. Comput. Sci. Tech., 1(1):46–50, 1986.
M. X. Goemans and T. Rothvoß. Polynomiality for bin packing with a constant number of item types. In Proc. SODA 2014, pages 830–839, 2014.
T. F. Gonzalez. Handbook of Approximation Algorithms and Metaheuristics. Chapman and Hall/CRC, 2007.
J. Guo, F. Hüffner, and R. Niedermeier. A structural view on parameterizing problems: Distance from triviality. In Proc. IWPEC 2004, volume 3162 of Lecture Notes Comput. Sci., pages 162–173, 2004.
G. Z. Gutin, M. Wahlström, and A. Yeo. Rural postman parameterized by the number of components of required edges. J. Comput. Syst. Sci., 83(1):121–131, 2017.
R. Hoberg and T. Rothvoß. A logarithmic additive integrality gap for bin packing. In Proc. SODA 2017, pages 2616–2625, 2017.
K. Jansen and K. Klein. About the structure of the integer cone and its application to bin packing. In Proc. SODA 2017, pages 1571–1581, 2017.
K. Jansen, S. Kratsch, D. Marx, and I. Schlotter. Bin packing with fixed number of bins revisited. J. Comput. Syst. Sci., 79(1):39–49, 2013.
K. Jansen and R. Solis-Oba. A polynomial time OPT + 1 algorithm for the cutting stock problem with a constant number of object lengths. Math. Oper. Res., 36(4):743–753, 2011.
M. Kano and X. Li. Monochromatic and heterochromatic subgraphs in edge-colored graphs: A survey. Graphs and Combinatorics, 24(4):237–263, 2008.
H. Kellerer, U. Pferschy, and D. Pisinger. Knapsack Problems. Springer, 2004.
C. Kenyon. Best-fit bin-packing with random order. In Proc. SODA 1996, 1996.
J. Kuipers. Bin packing games. Math. Meth. Oper. Res., 47(3):499–510, 1998.
V. B. Le and F. Pfender. Complexity results for rainbow matchings. Theor. Comput. Sci., 524:27–33, 2014.
S. Martello and P. Toth. Knapsack Problems: Algorithms and Computer Implementations. Wiley-Interscience Series in Discrete Mathematics and Optimization. John Wiley & Sons, Ltd., Chichester, 1990.
D. Marx and M. Pilipczuk. Everything you always wanted to know about the parameterized complexity of subgraph isomorphism (but were afraid to ask). In Proc. STACS 2014, volume 25 of Leibniz Int. Proc. Informatics, pages 542–553, 2014.
S. Th. McCormick, S. R. Smallwood, and F. C. R. Spieksma. A polynomial algorithm for multiprocessor scheduling with two job lengths. In Proc. SODA 1997, pages 509–517, 1997.
S. Th. McCormick, S. R. Smallwood, and F. C. R. Spieksma. A polynomial algorithm for multiprocessor scheduling with two job lengths. Math. Oper. Res., 26(1):31–49, 2001.
M. Mnich and R. van Bevern. Parameterized complexity of machine scheduling: 15 open problems. Comput. & OR, 100:254–261, 2018.
K. Mulmuley, U. V. Vazirani, and V. V. Vazirani. Matching is as easy as matrix inversion. Combinatorica, 7(1):105–113, 1987.
R. Niedermeier. Invitation to Fixed-Parameter Algorithms. Oxford University Press, 2006.
R. Niedermeier and P. Rossmanith. An efficient fixed-parameter algorithm for 3-hitting set. J. Discrete Algorithms, 1(1):89–102, 2003.
P. W. Shor. Random Planar Matching and Bin Packing. PhD thesis, Massachusetts Institute of Technology, Department of Mathematics, 1985.
P. W. Shor. The average-case analysis of some on-line algorithms for bin packing. Combinatorica, 6(2):179–200, 1986.
M. Sorge, R. van Bevern, R. Niedermeier, and M. Weller. A new view on rural postman based on Eulerian extension and matching. J. Discrete Algorithms, 16:12–33, 2012.
R. van Bevern. Towards optimal and expressive kernelization for d-hitting set. Algorithmica, 70(1):129–147, 2014.
G. J. Woeginger. There is no asymptotic PTAS for two-dimensional vector packing.