Approximation Algorithms for Generalized Multidimensional Knapsack
Arindam Khan∗   Eklavya Sharma∗   K. V. N. Sreenivas∗

Abstract
We study a generalization of the knapsack problem with geometric and vector constraints. The input is a set of rectangular items, each with an associated profit and d nonnegative weights (a d-dimensional vector), and a square knapsack. The goal is to find a non-overlapping axis-parallel packing of a subset of items into the given knapsack such that the vector constraints are not violated, i.e., the sum of weights of all the packed items in any of the d dimensions does not exceed one. We consider two variants of the problem: (i) the items are not allowed to be rotated, (ii) items can be rotated by 90 degrees. We give a (2 + ε)-approximation algorithm for this problem (both versions). In the process, we also study a variant of the maximum generalized assignment problem (Max-GAP), called Vector-Max-GAP, and design a PTAS for it.

1 Introduction

The knapsack problem is a fundamental, well-studied problem in the field of combinatorial optimization and approximation algorithms. It is one of Karp's 21 NP-complete problems and has been studied extensively for more than a century [15]. A common generalization of the knapsack problem is the 2-D geometric knapsack problem. Here, the knapsack is a unit square and the items are rectangular objects with designated profits. The objective is to pack a maximum-profit subset of items such that no two items overlap and all the items are packed parallel to the axes of the knapsack (called axis-parallel or orthogonal packing). There are two further variants of the problem depending on whether we are allowed to rotate the items by 90 degrees or not. Another well-studied variant of the knapsack problem is the d-D vector knapsack problem, where we are given a set of items, each with a d-dimensional weight vector and a profit, and a d-dimensional bin (d is a constant).
The goal is to select a subset of items of maximum profit such that the sum of all weights in each dimension is bounded by the bin capacity in that dimension.

In this paper we study a natural variant of the knapsack problem, called (2, d) Knapsack, which considers items with both geometric dimensions and vector dimensions. In this 2-D geometric knapsack problem with d-D vector constraints (where d is a constant), or (2, d) KS in short, the input set I consists of n items. For any item i ∈ I, let w(i), h(i), p(i) denote the width, height and the profit of the item, respectively, and let a(i) = w(i)h(i) denote the area of the item. Also, for any item i ∈ I, let v_j(i) denote the weight of the item in the j-th dimension. The objective is to pack a maximum-profit subset of items J ⊆ I into a unit square knapsack in an axis-parallel, non-overlapping manner such that for any j ∈ [d], Σ_{i∈J} v_j(i) ≤ 1.

(∗ Department of Computer Science and Automation, Indian Institute of Science, Bengaluru, India. [email protected], [email protected], [email protected])

In the Vector-Max-GAP problem (defined formally in Section 2), items are assigned to capacitated machines, and additionally there is a d-dimensional weight vector associated with every item and a d-dimensional global weight constraint on the whole setup of machines. The objective is to find the maximum value obtainable so that no machine's capacity is breached and the overall weight of items does not cross the global weight constraint.
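To make the constraints of (2, d) KS concrete, the following sketch checks whether a candidate placement of items is feasible. The item representation (`PlacedItem`) and the function name are our own illustrative choices, not notation from the paper:

```python
from dataclasses import dataclass
from itertools import combinations

@dataclass
class PlacedItem:
    x: float          # bottom-left corner, x coordinate
    y: float          # bottom-left corner, y coordinate
    w: float          # width
    h: float          # height
    profit: float
    v: tuple          # the d nonnegative vector weights

def is_feasible(packing, d):
    """Check the three (2, d) KS constraints for a candidate placement."""
    # 1. Every item must lie inside the unit square knapsack.
    if any(it.x < 0 or it.y < 0 or it.x + it.w > 1 or it.y + it.h > 1
           for it in packing):
        return False
    # 2. Axis-parallel items must not overlap (interiors must be disjoint).
    for a, b in combinations(packing, 2):
        if (a.x < b.x + b.w and b.x < a.x + a.w and
                a.y < b.y + b.h and b.y < a.y + a.h):
            return False
    # 3. Vector constraints: total weight in each dimension is at most one.
    return all(sum(it.v[q] for it in packing) <= 1 for q in range(d))
```

For instance, two 0.5 × 0.5 items placed side by side with single-dimension weights 0.4 and 0.5 form a feasible packing; shifting one item so the rectangles overlap, or raising the weights so their sum exceeds one, makes the same set of items infeasible.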
Our two main contributions are: (i) a (2 + ε)-approximation algorithm for the (2, d) KS problem, and (ii) the definition of a new problem called Vector-Max-GAP, together with a PTAS for it.

To obtain the approximation algorithm for (2, d) KS, we first obtain a structural result using corridor decomposition, which shows that if we consider an optimal packing and remove either the vertical items (height ≫ width) or the horizontal items (width ≫ height), we can restructure the remaining packing to possess a 'nice' structure that can be searched for efficiently. In [9], which deals with the 2-D geometric knapsack problem, the authors use the PTAS for the maximum generalized assignment problem (Max-GAP) to efficiently search for such a nice structure. Due to the presence of vector constraints, we need a generalization of the Max-GAP problem, which we call the Vector-Max-GAP problem; we obtain a PTAS for it and use it to derive the desired approximation algorithm for (2, d) KS.

The classical knapsack problem admits a fully polynomial time approximation scheme (FPTAS) [14, 21]. Both the 2-D geometric knapsack (2DGK) and the 2-D vector knapsack problems are W[1]-hard [12, 20], and thus do not admit an EPTAS unless W[1] = FPT. However, the d-D vector knapsack problem admits a PTAS [7]. On the other hand, it is not known whether 2DGK is APX-hard, and finding a PTAS for the problem is one of the major open problems in the area. The current best approximation algorithm [9] for this problem without rotations achieves an approximation ratio of 17/9 + ε. If rotations are allowed, the approximation ratio improves to 3/2 + ε. In a special case known as the cardinality case, where each item has a profit of one unit, the current best approximation factor is 558/325 + ε without rotations and 4/3 + ε with rotations (see [9]). PTASes exist for the restricted profit density case, i.e., when the profit/area ratio of each item is upper and lower bounded by fixed constants (see [2]), and for the case where all the items are small compared to the dimensions of the bin. An EPTAS exists for the case where all the items are squares [13]. If we can use resource augmentation, i.e., if we are allowed to increase the width or height of the knapsack by a factor of ε, we can construct a packing which gives the exact optimal profit in polynomial time (see [13]). 2DGK has also been studied in the pseudopolynomial setting [17, 11]. The following table summarizes the results for the 2DGK problem. (An EPTAS is a PTAS with running time f(ε)·n^c, where c is a constant that does not depend on ε.)

Case                      | Without rotations          | With rotations
General                   | 17/9 + ε [9]               | 3/2 + ε [9]
Cardinality case          | 558/325 + ε [9]            | 4/3 + ε [9]
Restricted profit density | PTAS [2]                   | PTAS [2]
Small items               | folklore FPTAS (Lemma 15)  | folklore FPTAS (Lemma 15)
Square items              | EPTAS [13]                 | irrelevant
Resource augmentation     | exact solution [13]        | exact solution [13]

Figure 1: The state-of-the-art for the 2-D geometric knapsack problem.

A well-studied problem closely related to the knapsack problem is bin packing. A natural generalization of the bin packing problem is the 2-D geometric bin packing (2GBP) problem, where items are rectangles and bins are unit squares. The current best asymptotic approximation ratio of (1 + ln 1.5 + ε) for 2GBP is due to [4]. Another generalization of bin packing is the vector bin packing problem (VBP), where each item has multiple constraints. For the current best approximation ratios for VBP, we refer the reader to [3]. There exist many other variants of the discussed packing problems, such as strip packing (see [8, 10]), maximum independent set of rectangles (see [18]), and storage allocation problems [22]. For an extensive survey of related packing problems, see [5, 16].

In Section 2, we study the Vector-Max-GAP problem and design a PTAS for it. Section 3 describes the (2 + ε)-approximation algorithm for the (2, d) Knapsack problem. In Sections 3.1 and 3.2, we use a structural result involving corridor decomposition and reduce our problem to a container packing problem. In Sections 3.3 and 3.4, we discuss how to model the container packing problem as an instance of the Vector-Max-GAP problem. Finally, in Section 3.5, we put everything together to obtain the (2 + ε)-approximation algorithm for the (2, d) Knapsack problem.

2 The Vector-Max-GAP Problem

We now formally define the Vector-Max-GAP problem. Let I be a set of n items numbered 1 to n and let M be a set of k machines, where k is a constant. The j-th machine has a capacity M_j. Each item i ∈ I has a size s_j(i) and a value val_j(i) in the j-th machine (j ∈ [k]).
Additionally, each item i also has a weight w_q(i) in the q-th dimension (q ∈ [d], where d is a constant). Assume that for all j ∈ [k], q ∈ [d] and i ∈ [n], the quantities M_j, w_q(i), s_j(i) and val_j(i) are all non-negative.

The objective is to assign a subset of items J ⊆ I to the machines such that for any machine j, the total size of the items assigned to it does not exceed M_j. Further, the total weight of the set J in any dimension q ∈ [d] must not exceed W_q, the global weight constraint of the whole setup in the q-th dimension. Respecting these constraints, we would like to maximize the total value of the items in J.

Formally, let J be the subset of items picked and J_j be the items assigned to the j-th machine (j ∈ [k]). The assignment is feasible iff the following constraints are satisfied:

∀q ∈ [d]:  Σ_{i∈J} w_q(i) ≤ W_q      (weight constraints)
∀j ∈ [k]:  Σ_{i∈J_j} s_j(i) ≤ M_j     (size constraints)

Let M⃗ = [M_1, M_2, ..., M_k], w⃗(i) = [w_1(i), w_2(i), ..., w_d(i)], s⃗(i) = [s_1(i), s_2(i), ..., s_k(i)], and val⃗(i) = [val_1(i), val_2(i), ..., val_k(i)]. Let s⃗ = [s⃗(1), s⃗(2), ..., s⃗(n)], w⃗ = [w⃗(1), w⃗(2), ..., w⃗(n)], val⃗ = [val⃗(1), val⃗(2), ..., val⃗(n)], and W⃗ = [W_1, W_2, ..., W_d].

An instance of this problem is given by (I, val⃗, s⃗, w⃗, M⃗, W⃗). We say that the set of items J is feasible for (s⃗, w⃗, M⃗, W⃗) iff J can fit in machines of capacities given by M⃗ and satisfies the global weight constraint given by W⃗, where item sizes and weights are given by s⃗ and w⃗ respectively.

Consider the case where item sizes and weights are integers. W.l.o.g., we can assume that the capacities of the machines, M_j (j ∈ [k]), and the weight constraints, W_q (q ∈ [d]), are also integers (otherwise, we can round them down to the closest integers). Arbitrarily order the items and number them from 1 onwards.
Let VAL(n, M⃗, W⃗) be the maximum value obtainable by assigning a subset of the first n items to k machines with capacities given by M⃗, respecting the global weight constraint W⃗. We can express VAL(n, M⃗, W⃗) as a recurrence:

VAL(n, M⃗, W⃗) =
    −∞                                                             if ¬(W⃗ ≥ 0⃗ ∧ M⃗ ≥ 0⃗)
    0                                                              if n = 0
    max( VAL(n − 1, M⃗, W⃗),
         max_{j=1}^{k} ( val_j(n) + VAL(n − 1, M⃗ − s_j(n)·e⃗_j, W⃗ − w⃗(n)) ) )   otherwise

where e⃗_j denotes the j-th standard basis vector. VAL(n, M⃗, W⃗) can be computed using dynamic programming. We can also recover the subset of items that achieves this value, and it is easy to ensure that no item assigned to a machine has value 0 in that machine. There are n·Π_{j=1}^{k}(M_j + 1)·Π_{q=1}^{d}(W_q + 1) states in the state space and each state takes Θ(d + k) time to evaluate. Therefore, the time taken by the dynamic programming solution is Θ(n(d + k)·Π_{j=1}^{k}(M_j + 1)·Π_{q=1}^{d}(W_q + 1)).

Let µ⃗ = [µ_1, µ_2, ..., µ_k] and δ⃗ = [δ_1, δ_2, ..., δ_d] be vectors whose values will be decided later. For j ∈ [k], define s'_j(i) = ⌈s_j(i)/µ_j⌉ and M'_j = ⌊M_j/µ_j⌋ + n. For q ∈ [d], define w'_q(i) = ⌈w_q(i)/δ_q⌉ and W'_q = ⌊W_q/δ_q⌋ + n.

Lemma 1.
Let J be feasible for (s⃗, w⃗, M⃗, W⃗). Then J is also feasible for (s⃗', w⃗', M⃗', W⃗').

Proof. For any dimension q ∈ [d], Σ_{i∈J} w'_q(i) = Σ_{i∈J} ⌈w_q(i)/δ_q⌉ ≤ Σ_{i∈J} (⌊w_q(i)/δ_q⌋ + 1), and

Σ_{i∈J} (⌊w_q(i)/δ_q⌋ + 1) ≤ |J| + ⌊(1/δ_q)·Σ_{i∈J} w_q(i)⌋ ≤ n + ⌊W_q/δ_q⌋ = W'_q.

Let J_j be the items in J assigned to the j-th machine. Then Σ_{i∈J_j} s'_j(i) = Σ_{i∈J_j} ⌈s_j(i)/µ_j⌉ ≤ Σ_{i∈J_j} (⌊s_j(i)/µ_j⌋ + 1), and

Σ_{i∈J_j} (⌊s_j(i)/µ_j⌋ + 1) ≤ |J_j| + ⌊(1/µ_j)·Σ_{i∈J_j} s_j(i)⌋ ≤ n + ⌊M_j/µ_j⌋ = M'_j.

Lemma 2.
Let J be feasible for (s⃗', w⃗', M⃗', W⃗'). Then J is also feasible for (s⃗, w⃗, M⃗ + nµ⃗, W⃗ + nδ⃗).

Proof. For all q ∈ [d],

Σ_{i∈J} w_q(i) ≤ Σ_{i∈J} δ_q·w'_q(i) ≤ δ_q·W'_q = δ_q(⌊W_q/δ_q⌋ + n) ≤ W_q + nδ_q.

Let J_j be the items in J assigned to the j-th machine. Then

Σ_{i∈J_j} s_j(i) ≤ Σ_{i∈J_j} µ_j·s'_j(i) ≤ µ_j·M'_j = µ_j(⌊M_j/µ_j⌋ + n) ≤ M_j + nµ_j.

Now let µ_j = εM_j/n for all j ∈ [k] and δ_q = εW_q/n for all q ∈ [d]. Let J* be the optimal solution to (I, val⃗, s⃗, w⃗, M⃗, W⃗), and let Ĵ be the optimal solution to (I, val⃗, s⃗', w⃗', M⃗', W⃗'). By Lemma 1, val(Ĵ) ≥ val(J*). By Lemma 2, Ĵ is feasible for (s⃗, w⃗, (1 + ε)M⃗, (1 + ε)W⃗). Also, observe that M'_j ≤ n + M_j/µ_j = n(1 + 1/ε), which is polynomial in n. Similarly, W'_q ≤ n(1 + 1/ε). Therefore, the optimal solution to (I, val⃗, s⃗', w⃗', M⃗', W⃗') can be obtained using the dynamic-programming algorithm in polynomial time, specifically in time Θ((d + k)·n^{d+k+1}/ε^{d+k}).

Let us define a subroutine assign-res-aug_ε(I, val⃗, s⃗, w⃗, M⃗, W⃗) which takes as input a set I with associated values val⃗, and gives as output an optimal feasible solution to (s⃗, w⃗, (1 + ε)M⃗, (1 + ε)W⃗).

Consider a set I of items where each item has a length s(i) and a profit p(i) (in this subsection, I is an instance of the knapsack problem instead of the Vector-Max-GAP problem). Suppose that for all i ∈ I, s(i) ∈ (0, ε] and s(I) ≤ 1 + δ. We will show that there exists an R ⊆ I such that s(I − R) ≤ 1 and p(R) < (δ + ε)·p(I).
We call this technique of removing a low-profit subset from I so that the rest fits in a bin of length 1 trimming.

Arbitrarily order the items and arrange them linearly in a bin of size 1 + δ. Let k = ⌊1/(δ + ε)⌋. Create k + 1 intervals of length δ and k intervals of length ε, placing δ-intervals and ε-intervals alternately. They fit in the bin because

(k + 1)δ + kε = δ + (δ + ε)⌊1/(δ + ε)⌋ ≤ 1 + δ.

Number the δ-intervals from 0 to k and let S_i be the set of items intersecting the i-th δ-interval. Note that all the S_i are mutually disjoint, since consecutive δ-intervals are separated by an ε-interval and each item has length at most ε. Let i* = argmin_{i=0}^{k} p(S_i). Then

p(S_{i*}) = min_{i=0}^{k} p(S_i) ≤ (1/(k + 1))·Σ_{i=0}^{k} p(S_i) ≤ p(I)/(⌊1/(δ + ε)⌋ + 1) < (δ + ε)·p(I).

Removing S_{i*} creates an empty interval of length δ in the bin, and the remaining items can be shifted so that they fit in a bin of length 1.

2.4 Packing Small Items

Consider a Vector-Max-GAP instance (I, val⃗, s⃗, w⃗, M⃗, W⃗). Item i is said to be ε-small for this instance iff w⃗(i) ≤ εW⃗ and, for all j ∈ [k], (s_j(i) ≤ εM_j or val_j(i) = 0). A set I of items is said to be ε-small iff each item in I is ε-small.

Suppose I is ε-small. Let J ⊆ I be a feasible solution to (I, val⃗, s⃗, w⃗, (1 + ε)M⃗, (1 + ε)W⃗), and let J_j be the items assigned to the j-th machine. For each j ∈ [k], use trimming on J_j for the sizes s_j, and then for each q ∈ [d], use trimming on J for the weights w_q. In both cases, use ε := ε and δ := ε. Let R be the removed items and J' = J − R be the remaining items. The total value lost is less than 2ε(d + 1)·val(J), and J' is feasible for (s⃗, w⃗, M⃗, W⃗).

Therefore, any resource-augmented solution J of small items can be transformed into a feasible solution J' of value at least (1 − 2(d + 1)ε)·val(J).

Theorem 3.
Let J be a feasible solution to (I, val⃗, s⃗, w⃗, M⃗, W⃗), and let J_j ⊆ J be the items assigned to the j-th machine. Then for all ε > 0, there exist sets X and Y such that |X| ≤ (d + k)/ε² and val(Y) ≤ ε·val(J), and

∀j ∈ [k], ∀i ∈ J_j − X − Y:  s_j(i) ≤ ε(M_j − s_j(X ∩ J_j)),
∀i ∈ J − X − Y:  w⃗(i) ≤ ε(W⃗ − w⃗(X)).

Proof.
Let P_{1,j} = {i ∈ J_j : s_j(i) > εM_j} for j ∈ [k], and Q_{1,q} = {i ∈ J : w_q(i) > εW_q} for q ∈ [d]. We know that s_j(P_{1,j}) > εM_j·|P_{1,j}|, and since s_j(P_{1,j}) ≤ M_j, it follows that |P_{1,j}| ≤ 1/ε. Similarly, w_q(Q_{1,q}) > εW_q·|Q_{1,q}|, so |Q_{1,q}| ≤ 1/ε. Let R_1 = (∪_{j=1}^{k} P_{1,j}) ∪ (∪_{q=1}^{d} Q_{1,q}). R_1 is therefore the set of items in J that are in some sense 'big'. Note that |R_1| ≤ (d + k)/ε.

If val(R_1) ≤ ε·val(J), set Y = R_1 and X = {} and we are done. Otherwise, set P_{2,j} = {i ∈ J_j − R_1 : s_j(i) > ε(M_j − s_j(R_1 ∩ J_j))}, Q_{2,q} = {i ∈ J − R_1 : w_q(i) > ε(W_q − w_q(R_1))}, and R_2 = (∪_{j=1}^{k} P_{2,j}) ∪ (∪_{q=1}^{d} Q_{2,q}). If val(R_2) ≤ ε·val(J), set Y = R_2 and X = R_1 and we are done. Otherwise, set P_{3,j} = {i ∈ J_j − R_1 − R_2 : s_j(i) > ε(M_j − s_j((R_1 ∪ R_2) ∩ J_j))}, Q_{3,q} = {i ∈ J − R_1 − R_2 : w_q(i) > ε(W_q − w_q(R_1) − w_q(R_2))}, and R_3 = (∪_{j=1}^{k} P_{3,j}) ∪ (∪_{q=1}^{d} Q_{3,q}). If val(R_3) ≤ ε·val(J), set Y = R_3 and X = R_1 ∪ R_2 and we are done. Otherwise, similarly compute R_4 and check whether val(R_4) ≤ ε·val(J), and so on. Extending the arguments used to bound |R_1|, it follows that |R_t| ≤ (d + k)/ε for all t > 0. Since every R_t (t >
0) is disjoint from all the others, and each of R_1, ..., R_{T−1} has value more than ε·val(J), there will be some R_T, with T ≤ 1/ε, such that val(R_T) ≤ ε·val(J). Now set Y = R_T and X = R_1 ∪ ... ∪ R_{T−1}. We can see that |X| ≤ Σ_{t=1}^{T−1} |R_t| ≤ T(d + k)/ε ≤ (d + k)/ε², and val(Y) = val(R_T) ≤ ε·val(J). Note that all items in J − X − Y are small because of the way R_T was constructed. Hence it follows that

∀j ∈ [k], ∀i ∈ J_j − X − Y:  s_j(i) ≤ ε(M_j − s_j(X ∩ J_j)),
∀i ∈ J − X − Y:  w⃗(i) ≤ ε(W⃗ − w⃗(X)).

2.6 PTAS for Vector-Max-GAP

Let J* be an optimal assignment for (I, val⃗, s⃗, w⃗, M⃗, W⃗), and let J*_j ⊆ J* be the items assigned to the j-th machine. By Theorem 3, J* can be partitioned into sets X*, Y* and Z* such that |X*| ≤ (d + k)/ε² and val(Y*) ≤ ε·val(J*). Let W⃗* = W⃗ − w⃗(X*) and M*_j = M_j − s_j(X* ∩ J*_j). Then Z* is ε-small for (val⃗, s⃗, w⃗, M⃗*, W⃗*).

For a set S, define Π_k(S) as the set of k-partitions of S. The pseudocode in Algorithm 1 then provides a PTAS for the Vector-Max-GAP problem.

Algorithm 1
Vector-Max-GAP(I, val⃗, s⃗, w⃗, M⃗, W⃗): PTAS for Vector-Max-GAP

  J_best = {}
  for X ⊆ I such that |X| ≤ (d + k)/ε² do
      for (X_1, X_2, ..., X_k) ∈ Π_k(X) do
          W⃗' = W⃗ − w⃗(X)
          M'_j = M_j − s_j(X_j) for each j ∈ [k]
          val'_j(i) = val_j(i) if s_j(i) ≤ εM'_j, and 0 otherwise, for each i ∈ I − X
              ▷ I − X is ε-small for (val⃗', s⃗, w⃗, M⃗', W⃗')
          Z' = assign-res-aug_ε(I − X, val⃗', s⃗, w⃗, M⃗', W⃗')
          Trim Z' to get Z so that Z is feasible for (s⃗, w⃗, M⃗', W⃗')
          J = X ∪ Z
          if val(J) > val(J_best) then
              J_best = J
          end if
      end for
  end for
  return J_best

Correctness: Since Z is feasible for (s⃗, w⃗, M⃗', W⃗'), X ∪ Z is feasible for (s⃗, w⃗, M⃗, W⃗).

Approximation guarantee: In some iteration of Algorithm 1, X = X* and X_j = X* ∩ J*_j. When that happens, W⃗' = W⃗* and M⃗' = M⃗*. Let val* be the maximum value of an ε-small assignment of items to machines with capacities given by M⃗' under the weight constraints W⃗'. Then val* ≥ val(Z*). To find an optimal assignment of small items, we forbid non-small items from being assigned to a machine: if s_j(i) > εM'_j for item i, we set val'_j(i) to 0. Using our resource-augmented Vector-Max-GAP algorithm, we get val(Z') ≥ val*. By the property of trimming, val(Z) ≥ (1 − 2(d + 1)ε)·val(Z'). Therefore,

val(J_best) ≥ val(X*) + val(Z) ≥ val(X*) + (1 − 2(d + 1)ε)·val(Z*) ≥ (1 − 2(d + 1)ε)(1 − ε)·val(J*).

This gives us a (1 − (2d + 3)ε)-approximate solution. The running time is easily seen to be polynomial: assign-res-aug runs in polynomial time, the number of iterations of the outer loop of Algorithm 1 is polynomial in n (since |X| ≤ (d + k)/ε², a constant), and for each iteration of the outer loop, the inner loop runs at most a constant number of times.
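The exact dynamic program for integral inputs that underlies assign-res-aug can be written directly from the recurrence for VAL. The sketch below is an illustrative memoized implementation for small integer instances; the function name and input layout are our own, not the paper's:

```python
from functools import lru_cache

def vector_max_gap_exact(sizes, vals, weights, M, W):
    """Exact DP for Vector-Max-GAP with integer sizes and weights.

    sizes[i][j]   -- size of item i on machine j
    vals[i][j]    -- value of item i on machine j
    weights[i][q] -- weight of item i in vector dimension q
    M[j], W[q]    -- machine capacities and global weight budgets
    Implements the recurrence VAL(n, M, W) from the text.
    """
    n, k, d = len(sizes), len(M), len(W)

    @lru_cache(maxsize=None)
    def VAL(i, Mrem, Wrem):
        # Infeasible branch: some residual budget has gone negative.
        if any(c < 0 for c in Mrem) or any(b < 0 for b in Wrem):
            return float('-inf')
        if i == 0:
            return 0
        best = VAL(i - 1, Mrem, Wrem)             # drop item i
        Wnew = tuple(Wrem[q] - weights[i - 1][q] for q in range(d))
        for j in range(k):                        # or put item i on machine j
            Mnew = tuple(Mrem[t] - (sizes[i - 1][j] if t == j else 0)
                         for t in range(k))
            best = max(best, vals[i - 1][j] + VAL(i - 1, Mnew, Wnew))
        return best

    return VAL(n, tuple(M), tuple(W))
```

The memo table has at most n·Π_j(M_j + 1)·Π_q(W_q + 1) reachable states, matching the running-time bound stated for the dynamic program; the rounding via µ⃗ and δ⃗ is what makes these capacities polynomial in n.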
3 Algorithm for the (2, d) Knapsack Problem

In this section, we obtain a (2 + ε)-approximation algorithm for the (2, d) knapsack problem, both with and without rotations. First, we present the algorithm for the case without rotations; the algorithm for the case where rotations are allowed is similar except for some small changes.

Let I be a set of n (2, d)-dimensional items. We are given a (2, d)-dimensional knapsack and we would like to pack a maximum-profit subset of I into the knapsack. Let us denote this optimal profit by OPT_GVKS(I). Let w⃗ = [w(1), ..., w(n)], h⃗ = [h(1), ..., h(n)], p⃗ = [p(1), ..., p(n)]. For an item i ∈ I, let v⃗(i) = [v_1(i), ..., v_d(i)], and let v⃗ = [v⃗(1), ..., v⃗(n)].

Throughout this section, a container is a rectangular region inside the knapsack. For our purposes, every container is of one of four types: large, wide, tall, or area. A large container can contain at most one item. An area container can only contain items that are ε-small for the container, i.e., an item can be packed into an area container of width w and height h only if the item has width at most εw and height at most εh. In a wide (resp. tall) container, items must be stacked one on top of another (resp. arranged side by side). We also require that the containers do not overlap among themselves and that no item partially overlaps with any of the containers.

We also use the notation O_ε(1) in place of O(f(ε)), where ε is an arbitrary constant and f(·) is a function which depends solely on the value of ε, when we do not explicitly define what f(·) is. Similarly, O_{ε_1,ε_2}(1) represents O(f(ε_1, ε_2)), and so on.

Consider a set of items S that are packed in a knapsack. We now state a structural result, inspired by [9], in which only a subset of items S' ⊆ S is packed into the knapsack. We may lose some profit, but the packing has a nice structure which can be searched for efficiently.

Theorem 4.
Let S denote a set of items that can be feasibly packed into a knapsack, and let 0 < ε < 1 be any small constant. Then there exists a subset S' ⊆ S with p(S') ≥ (1/2 − ε)·p(S) such that the items in S' can be packed into the knapsack in containers. Further, the number of containers is O_ε(1), and their widths and heights belong to a set whose cardinality is poly(|S|); moreover, this set can be computed in time poly(|S|).

Now, let us go back to our original problem instance I. Let I_OPT be the set of items packed into the knapsack in an optimal packing P. Let us apply Theorem 4 to the set of items I_OPT with ε := ε_struct (ε_struct will be defined later). Let I'_OPT be the resulting analog of S' in the theorem (there can be many candidates for I'_OPT, but let us pick one). Therefore,

p(I'_OPT) ≥ (1/2 − ε_struct)·p(I_OPT) = (1/2 − ε_struct)·OPT_GVKS(I).    (1)

In this subsection, we prove Theorem 4. The strategy is to use the corridor decomposition scheme, essentially taken from [1]. First, we assume that we can remove O_{1/ε}(1) items at the cost of zero profit loss from the originally packed items. Under this assumption, we show that we lose at most half of the profit by our restructuring. Finally, we show how to get rid of this assumption using shifting argumentation.

Removing medium items: Let ε_small and ε_big be two fixed constants such that ε_big > ε_small. We partition the items in S based on the values of ε_small and ε_big as follows:

• S_S = {i ∈ S : w(i) ≤ ε_small and h(i) ≤ ε_small}
• S_B = {i ∈ S : w(i) > ε_big and h(i) > ε_big}
• S_W = {i ∈ S : w(i) > ε_big and h(i) ≤ ε_small}
• S_T = {i ∈ S : w(i) ≤ ε_small and h(i) > ε_big}
• S_med = {i ∈ S : (ε_small < w(i) ≤ ε_big) or (ε_small < h(i) ≤ ε_big)}

We call the items in S_S small items, as they are small in both dimensions. Similarly, we call the items in S_B, S_W, S_T and S_med big, wide, tall and medium items, respectively. By standard arguments, it is possible to choose the constants ε_small and ε_big such that the total profit of all the medium items is at most ε·p(S). Hence, we can discard the items in S_med from S while losing a very small profit. We omit the proof as it is present in many articles on packing (see, for example, [9]).

First, let us define what a subcorridor is. A subcorridor is a rectangle in the 2-D coordinate system with one side longer than ε_big and the other side of length at most ε_big. A subcorridor is called wide (resp. tall) if the longer side is parallel to the x-axis (resp. y-axis). A corridor is either a single subcorridor or a union of at most 1/ε subcorridors such that each wide (resp. tall) subcorridor overlaps with exactly two tall (resp.
wide) subcorridors, except for at most two subcorridors, called the ends of the corridor, which may overlap with exactly one other subcorridor. The overlap between any two subcorridors should be such that one of their corners coincides.

For now, let us consider a generic packing of a set of items S that can contain big, small, wide and tall items. Since the number of big items packed can be at most a constant, and since we assumed that we can remove a constant number of items at no profit loss, let us remove all the big items from the packing. Let us also set aside the small items for now (we will pack them later). Hence, we are left with wide and tall items packed in the knapsack. We call these items skewed items and denote the set of these items by S_skew.

Lemma 5 (Corridor Packing Lemma [1]). There exist non-overlapping corridor regions in the knapsack such that we can partition S_skew into sets S_corr, S_cross^nice, S_cross^bad such that:

• |S_cross^nice| = O_{1/ε}(1).
• p(S_cross^bad) ≤ ε·p(S_skew).
• Every item in S_corr is fully contained in one of the corridors. The number of corridors is O_{1/ε, 1/ε_big}(1), and in each corridor, the number of subcorridors is at most 1/ε.
• Each subcorridor has length at least ε_big and breadth less than ε_big (where length denotes the longer side and breadth the shorter side).

Remark 6. The total number of subcorridors after the corridor partition is O_{1/ε, 1/ε_big}(1).

We also remove the items in S_cross^nice, since they are at most a constant in number, and the items in S_cross^bad, since their total profit is very small. The last point of the lemma ensures that a skewed item is completely contained in at most one subcorridor. Hence, for every skewed item contained in a corridor, we can tell which subcorridor it belongs to. The last point also ensures that there cannot be a subcorridor which completely contains both wide and tall items.
This fact allows us to label each subcorridor as wide or tall: if a subcorridor contains only wide (resp. tall) items, we say that it is a wide (resp. tall) subcorridor.

Removing either wide or tall items: Now we simplify the above structure of skewed items while losing at most half of the original profit. Assume w.l.o.g. that the total profit of the wide items is at least as much as the total profit of the tall items. Then, if we get rid of the tall items, we lose at most half of the original profit. With this step, we can also remove all the tall subcorridors, since they are now empty. We are left with wide items packed in wide subcorridors. Since these subcorridors are just rectangles and no longer overlap, we simply call them boxes.

Next, we describe how to reorganize the items in these boxes into containers at a very marginal profit loss.
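The reorganization within a box starts by freeing a thin horizontal strip: partition the box into roughly 1/ε_box horizontal regions and delete everything touching the cheapest one. A toy sketch of this strip-removal step, with an assumed (y, height, profit) item representation of our own choosing:

```python
import math

def remove_cheapest_slab(items, b, eps_box):
    """Free a horizontal strip of height eps_box * b inside a box.

    items: list of (y, height, profit) triples stacked in a box of height b
    (an assumed representation). Deletes every item touching the region
    whose fully-contained items have the least total profit; shifting the
    survivors down then lets them fit in height (1 - eps_box) * b.
    """
    r = math.ceil(1 / eps_box)          # number of horizontal regions
    rh = b / r                          # height of one region

    def touched(y, h):                  # index range of regions an item touches
        lo = int(y / rh)
        hi = min(int(math.ceil((y + h) / rh)) - 1, r - 1)
        return lo, hi

    # Profit of the items fully contained in each region.
    contained = [0.0] * r
    for y, h, p in items:
        lo, hi = touched(y, h)
        if lo == hi:
            contained[lo] += p
    j = min(range(r), key=lambda i: contained[i])   # cheapest region
    # Keep every item that does not touch region j.
    return [(y, h, p) for (y, h, p) in items
            if not touched(y, h)[0] <= j <= touched(y, h)[1]]
```

In a box of height 1 with eps_box = 0.5, a profitable item in the lower half, a cheap item in the upper half, and a straddling item leave only the lower item once the cheaper upper region is cleared.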
Note that the boxes contain only wide items. Since the width of each wide item is lower bounded by ε_big, we can create an empty wide strip in each box at the loss of a very small profit and at most a constant number of items. This is described in the following lemma.

Lemma 7.
Let B be a box of dimensions a × b such that each item contained in it has width at least µ, where µ is a constant. Let the total profit packed inside the box be P, and let ε_box be a small constant. Then it is possible to pack all the items, barring a set of items of profit at most ε_box·P and a set of O_{1/µ}(1) items, into a box of dimensions a × (1 − ε_box)b.

Proof. Assume that the height of the box is along the y-axis and the width of the box is along the x-axis, and that the bottom-left corner of the box is situated at the origin. Draw lines at y = y_i for i ∈ {1, ..., ⌊1/ε_box⌋}, where y_i = i·ε_box·b. These lines partition the box into ⌈1/ε_box⌉ regions. For all i ∈ {1, ..., ⌈1/ε_box⌉}, let s_i be the set of items completely contained in the i-th region. Then there must exist some region j such that p(s_j) ≤ (1/⌈1/ε_box⌉)·P ≤ ε_box·P. We remove all the items in s_j. Also, the number of items partially overlapping with region j is at most 2/µ, which is O_{1/µ}(1); we remove these items as well. This creates an empty strip of size a × ε_box·b, and hence the remaining items can be packed in a box of size a × (1 − ε_box)b.

We apply the above lemma to each and every box, with ε_box small enough and µ = ε_big. Consider a box B, let S_box be the items packed inside B, and let a × b be the dimensions of the box. Let S'_box be the set of items remaining in the box after performing the steps in Lemma 7. S'_box can be packed within a box of dimensions a × (1 − ε_box)b, and we are left with some empty space in the vertical direction. Hence, we can apply the resource augmentation lemma, which is taken from [9].

Lemma 8 (Resource Augmentation Lemma). Let I be a set of items packed inside a box of dimensions a × b and let ε_ra be a given constant.
Then there exists a container packing of a set I' ⊆ I inside a box of size a × (1 + ε_ra)b such that:

• p(I') ≥ (1 − O(ε_ra))·p(I).
• The number of containers is O_{1/ε_ra}(1), and the dimensions of the containers belong to a set of cardinality O(|I|^{O_{1/ε_ra}(1)}).
• The total area of the containers is at most a(I) + ε_ra·ab.

Applying the above lemma to the box B, we obtain a packing where S'_box is packed into a box of size a × (1 − ε_box)(1 + ε_ra)b. We choose ε_box and ε_ra such that (1 − ε_box)(1 + ε_ra) < 1 − 2ε.

Lemma 9.
The total area of the containers obtained is at most min{(1 − 2ε), a(S_corr) + ε_ra}.

Proof.
The second upper bound follows directly from the last point of Lemma 8: the area of the containers in a box B of dimensions a × b is at most a(S_box) + ε_ra·ab. Summing over all boxes (whose total area is at most 1), we get the bound a(S_corr) + ε_ra.

For the first upper bound, observe that every box of dimensions a × b is shrunk into a box of dimensions a × (1 − 2ε)b. Since all the original boxes were packable into the knapsack, the total area of the boxes (and hence of the containers) after shrinking is at most (1 − 2ε) times the area of the knapsack itself, which is 1.

At this point, we have packed all the items into containers (except the constant number of discarded items and the small items). Let us count the number of containers created. Each box was partitioned into O_{1/ε_ra}(1) containers, as given by Lemma 8, and the boxes are nothing but subcorridors, whose number is O_{1/ε, 1/ε_big}(1) by Remark 6. Hence the total number of containers formed is O_{1/ε, 1/ε_big, 1/ε_ra}(1). This fact is going to be crucial in order to pack the small items.

In this subsection, we show how to pack the small items that were temporarily removed from the packing. Let the set of small items be denoted by S_small. Let ε_grid = 1/⌈ε/ε_small⌉. Let us define a uniform grid G in the knapsack where the grid lines are equally spaced at a distance of ε_grid. It is easy to see that the number of grid cells formed is 1/ε_grid², which is at most a constant. We mark a grid cell as free if it does not overlap with any container, and non-free otherwise. We delete all the non-free cells; the free cells will serve as the area containers that we use to pack the small items.

Lemma 10.
For some choice of ε_small and ε_ra, the total area of the non-free cells is at most min{(1 − ε), a(S_corr) + 3ε²}.

Proof.
Let A be the total area of the containers and k the number of containers. The total area of the cells completely covered by containers is at most A. The partially overlapped cells are the cells that intersect the edges of the containers. Since the number of containers is k, each container has 4 edges, and each edge can overlap with at most 1/ε_grid cells (each of area ε_grid²), the total area of the completely and partially overlapped cells is at most A + 4k · ε_grid = A + 4k/⌈ε/ε_small⌉.
As we noted before, k = O_{1/ε, 1/ε_big, 1/ε_ra}(1), which does not depend on ε_small. Hence, we can choose ε_small very small (compared to ε, ε_big, ε_ra) and ensure that the above quantity is at most A + 2ε. By Lemma 9, the value of A is bounded by min{(1 − ε), a(S_corr) + ε_ra}. Hence, the total area of the deleted cells is at most min{(1 − ε) + 2ε, a(S_corr) + ε_ra + 2ε}. Choosing ε_ra ≤ ε and ε sufficiently small, we get the desired result.
We now show that a significant area of free cells is left over to pack the small items profitably. Lemma 11.
The area of the free cells is at least (1 − ε) · a(S_small). Proof.
Since S_small and S_corr were packable together in the knapsack, a(S_small ∪ S_corr) ≤ 1. If a(S_small) ≥ ε, then the area of the free cells is at least 1 − a(S_corr) − ε ≥ a(S_small) − ε ≥ (1 − ε) · a(S_small). On the other hand, if a(S_small) < ε, then the area of the free cells is at least ε, which in turn is at least a(S_small).
Each free cell has dimensions ε_grid × ε_grid, and each side of a small item is at most ε_small. Hence, all small items are ε-small for the created free cells, and these cells can be used to pack the small items. Using NFDH, we can pack a very high profit into these area containers, as described in the following lemma. Lemma 12.
Let I be a set of items, and let there be k identical area containers such that each item i ∈ I is ε-small for every container and the whole set I can be packed into these containers. Then there exists an algorithm which packs a subset I′ ⊆ I into these area containers such that p(I′) ≥ (1 − ε) · p(I).
Proof. W.l.o.g., assume that each container has dimensions 1 × 1. So, we can assume that every item i ∈ I has width at most ε and height at most ε. Order the items in I in non-increasing ratio of profit to area (we call this ratio the profit density), breaking ties arbitrarily, and order the containers arbitrarily.
Start packing the items into the containers using NFDH. If we are able to pack all the items in I, we are done. Otherwise, consider an arbitrary container C. Let I_C be the items we packed in C and let i_C be the first item we could not pack in C. Then, by Lemma 15, a(I_C ∪ {i_C}) > (1 − ε) · a(C) = (1 − ε). But since a(i_C) ≤ ε², we get a(I_C) > 1 − ε − ε² ≥ 1 − 2ε. Hence, we have packed at least a (1 − 2ε) fraction of the total area of the containers with the densest items, and thus the claim follows.
At this point, we have packed at least a (1/2 − O_ε(1)) fraction of the total profit of the original packing, assuming that we can leave out a constant number of items at no cost.
We assumed that we can drop O_{1/ε}(1) items from S at no loss of profit. But this may not be true, since these items can carry a significant amount of profit in the original packing. The left-out constant number of items are precisely the big items, the items in S_cross^nice, and the items partially overlapping with the removed strip in Lemma 7.
The main idea is to fix some items in the knapsack and then carry out the algorithm discussed in the previous sections with some modifications. Again, we may leave out some items with high profit. We fix these items too and repeat this process. We can argue that at some point the left-out items have very little profit, and hence this process will end.
Let us define the set K(0) as the set of items that were removed in the above process. If the total profit of K(0) is at most ε · p(S), then we are done. If this is not the case, then we use a shifting argument.
Assume that we have completed the t-th iteration for some t > 0, and that p(K(r)) > ε · p(S) for all 0 ≤ r ≤ t. Let 𝒦(t) = ∪_{r=0}^{t} K(r).
We will argue that for every r ≤ t, |K(r)| is at most a constant. Then |𝒦(t)| is also at most a constant if t < ⌈1/ε⌉ (in fact, we will argue that t will not go beyond ⌊1/ε⌋).
Let us define a non-uniform grid G(t) induced by the set 𝒦(t) as follows: the x- and y-coordinates of the grid lines are given by the corners of the items in 𝒦(t). Note that the number of horizontal (resp. vertical) grid lines is bounded by 2|𝒦(t)|. This grid partitions the knapsack into a set of cells C(t). Since |𝒦(t)| is at most a constant, the number of grid cells created is also at most a constant.
Let us denote an arbitrary grid cell by C, and the items in S which intersect C and are not in 𝒦(t) by S(C). For an item i in S(C), let h(i ∩ C) and w(i ∩ C) denote the height and width of the overlap of i with C. We categorize the items in S(C) relative to C as follows:
• S_small(C) = {i ∈ S(C) : h(i ∩ C) ≤ ε_small·h(C) and w(i ∩ C) ≤ ε_small·w(C)}
• S_big(C) = {i ∈ S(C) : h(i ∩ C) > ε_big·h(C) and w(i ∩ C) > ε_big·w(C)}
• S_wide(C) = {i ∈ S(C) : h(i ∩ C) ≤ ε_small·h(C) and w(i ∩ C) > ε_big·w(C)}
• S_tall(C) = {i ∈ S(C) : h(i ∩ C) > ε_big·h(C) and w(i ∩ C) ≤ ε_small·w(C)}
We call an item i small if it is not in S_big(C) ∪ S_wide(C) ∪ S_tall(C) for any cell C. We call an item i big if it is in S_big(C) for some cell C. We call an item i wide (resp. tall) if it is in S_wide(C) (resp. S_tall(C)) for some cell C. We call an item i medium if there is a cell C such that h(i ∩ C) ∈ (ε_small·h(C), ε_big·h(C)] or w(i ∩ C) ∈ (ε_small·w(C), ε_big·w(C)].
It is easy to observe that an item belongs to exactly one of the small, big, wide, tall, and medium categories. Note that the width and height of a small item are at most ε_small.
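As an illustration, the per-cell categorization above can be sketched in code. All names (`classify`, the dict-based item/cell representation, the overlap helper) are illustrative assumptions, not from the paper:

```python
# Sketch: classify an item relative to a grid cell C, following the four
# categories above. Representations are illustrative, not from the paper.

def overlap(lo1, hi1, lo2, hi2):
    """Length of the overlap of intervals [lo1, hi1] and [lo2, hi2]."""
    return max(0.0, min(hi1, hi2) - max(lo1, lo2))

def classify(item, cell, eps_small, eps_big):
    """item/cell: dicts with keys x, y, w, h. Returns the category of
    `item` relative to `cell`: small / big / wide / tall / medium."""
    w_ov = overlap(item["x"], item["x"] + item["w"], cell["x"], cell["x"] + cell["w"])
    h_ov = overlap(item["y"], item["y"] + item["h"], cell["y"], cell["y"] + cell["h"])
    h_small, h_big = h_ov <= eps_small * cell["h"], h_ov > eps_big * cell["h"]
    w_small, w_big = w_ov <= eps_small * cell["w"], w_ov > eps_big * cell["w"]
    if h_small and w_small:
        return "small"
    if h_big and w_big:
        return "big"
    if h_small and w_big:
        return "wide"
    if h_big and w_small:
        return "tall"
    # some overlap length falls in the (eps_small, eps_big] window
    return "medium"
```

Note that, exactly as in the text, an item is medium precisely when one of its overlap lengths falls in the gap between ε_small and ε_big, which is why the five categories are exhaustive and disjoint.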
We call an item i skewed if it is either wide or tall. Again, by standard arguments, we can select ε_small and ε_big such that the profit of the medium items is at most ε · p(S). Hence, we remove all the medium items. We also remove all the small items for now, but we will pack them at the end, exactly as in Section 3.2.4. We add all the big items to K(t + 1). We can do this because the big items are at most constant in number: consider any cell C; the number of big items associated with it is at most a constant, and the number of cells is itself at most a constant.
We create a corridor decomposition of the knapsack with respect to the skewed items as follows. First, we transform the non-uniform grid into a uniform grid by moving the grid lines and simultaneously stretching or compressing the items. This transformation ensures that every wide (resp. tall) item associated with a cell C has width (resp. height) at least ε_big/(1 + 2|𝒦(t)|). We then apply Lemma 5 to the set of skewed items to obtain a set of O_{1/ε, 1/ε_big}(1) corridors. Let S_corr, S_cross^nice, S_cross^bad be the sets obtained as in Lemma 5. The set S_corr is the set of items that are packed completely inside the corridors. We add the set S_cross^nice to K(t + 1), since its items are constant in number. As the set S_cross^bad has very small profit, we discard it. We also remove all the items in the tall subcorridors, assuming, w.l.o.g., that their profit is at most that of the items in the wide subcorridors.
Now, we have the items in 𝒦(t) fixed inside the knapsack and the items in boxes (obtained after deleting the tall items, and hence the tall subcorridors). We would like to split the boxes into containers as in Section 3.2.3, but there is an issue: there can be items in 𝒦(t) which overlap with the boundaries of the boxes. However, these are at most a constant in number, and hence we can resolve this issue in the following way.
Consider an item i in 𝒦(t) partially overlapping with a box.
We extend each of its horizontal edges that lies inside the box in both directions until the ends of the box. These extended edges and i divide the items in the box into at most five categories: the items intersecting the extended edges, the items to the left of i, the items to the right of i, the items above i, and the items below i. The items in the first category are at most a constant in number, and hence we add them to K(t + 1). Thus i splits the box into at most four smaller boxes. We repeat this process for all the items in 𝒦(t) partially overlapping with the box. We obtain smaller boxes, but with the required property that no item in 𝒦(t) partially overlaps with a box. This is depicted in Figure 2.
We apply Lemma 7 to these smaller boxes and, while doing this, add all the items partially overlapping with the removed strip to K(t + 1). Then, we apply Lemma 8 to these new smaller boxes to split them into containers as in Section 3.2.3. At this point, the (t + 1)-th round ends and we look at the set K(t + 1). If p(K(t + 1)) ≤ ε · p(S), we stop; otherwise we continue to round t + 2. We can argue that the number of rounds is at most 1/ε: the sets K(r) and K(r + 1) are disjoint for all r ≥ 0, and each round continues only while p(K(r)) > ε · p(S); since the K(r) are pairwise disjoint subsets of S, this can happen for fewer than ⌈1/ε⌉ rounds. Hence, for some r < ⌈1/ε⌉, we can guarantee that p(K(r)) is at most ε · p(S). Therefore, after the r-th round, we end the process and add the small items to the packing as described in Section 3.2.4.
In [9], it is also shown how to modify the formed containers so that their widths and heights come from a set of cardinality poly(|S|). The basic idea is as follows. First, note that a large container contains only one item; hence, the container can be shrunk to the size of that item. Now, consider a wide container C_W. Items lie one on top of another inside C_W. So, h(C_W) can be expressed as the sum of the heights of the items, and w(C_W) can be expressed as the maximum width among the items in the container. If there are at most 1/ε items in C_W, then h(C_W) can take one of at most n^{1/ε} values. If C_W has more than 1/ε items, then consider the tallest 1/ε items in C_W; let this set be T. There must exist an item i ∈ T whose profit is at most ε · p(C_W). We remove the item i from the container and readjust the height of C_W so that it belongs to a polynomial-size set of values. Now consider an area container C_A. Since all items in C_A are ε-small with respect to the dimensions of C_A, we can leave out a subset of items carrying a marginal profit and create some space in both the horizontal and vertical directions, so that the width and height of C_A can be readjusted. For the full proof, one can refer to [9]. This proves Theorem 4.
From now on, our main goal is to derive an algorithm that constructs the nice structure of Theorem 4. We first formally define the problem of packing items into containers, which we call the container packing problem. We then model the container packing problem as an instance of the Vector-Max-GAP problem.
Since we have a PTAS for the Vector-Max-GAP problem, it follows that there exists a PTAS for the container packing problem.
Figure 2: Division of a box into smaller boxes. The dark item is the item in 𝒦(t) that overlaps with the box. The dashed lines are the extended edges, and the grey items are those that will be included in K(t + 1).
Let I be a set of items and let w, h, p, v denote the associated widths, heights, profits and weight vectors, respectively. Let C be a given set of containers such that the number of containers is a constant. Within C, let C_A, C_H, C_V, C_L denote the area, wide, tall and large containers, respectively. In the container packing problem, we would like to pack a subset of I into these containers such that:
• A large container is either empty or contains exactly one item.
• In a wide (resp. tall) container, all the items must be stacked one on top of another (resp. arranged side by side).
• The total area of the items packed in an area container must not exceed (1 − ε′) times the area of the container itself (assume that ε′ is a constant given as part of the input).
• The total weight of the items packed in all the containers in any dimension j ∈ {1, . . . , d} must not exceed one.
We denote an instance of the container packing problem by the tuple (I, w, h, p, v, C, ε′).
Now, let us define the Vector-Max-GAP problem. Let I be a set of n items, and suppose we have k machines such that the j-th machine has capacity M_j. An item i has size s_j(i) and value val_j(i) in the j-th machine, and weight w_q(i) in the q-th dimension. The objective is to assign a maximum-value subset of items I′ ⊆ I, each item to a machine, such that the total size of the items in any machine does not exceed the capacity of that machine. We also require that the total weight of I′ does not exceed W_q (a non-negative real) in any dimension q ∈ [d].
Let M = [M_1, M_2, . . . , M_k], W = [W_1, W_2, . . . , W_d], w(i) = [w_1(i), w_2(i), . . . , w_d(i)], s(i) = [s_1(i), s_2(i), . . . , s_k(i)], and val(i) = [val_1(i), val_2(i), . . . , val_k(i)]. Also, let s = [s(1), s(2), . . . , s(n)], w = [w(1), w(2), . . . , w(n)], val = [val(1), val(2), . . . , val(n)]. We denote an instance of Vector-Max-GAP by (I, val, s, w, M, W).
Let C = (I, w, h, p, v, C, ε′) denote an instance of the container packing problem. In this subsection, we show how to reduce C to an instance G = (I, val, s, w, M, W) of the Vector-Max-GAP problem.
We retain I from C. Since we have unit vector constraints over all the containers combined in the container packing problem, we initialize W to be the all-ones vector. We choose the number of machines in G to be the same as the number of containers in C. Let |C| = k and |I| = n. The vectors val, s, M are defined as follows:
s_j(i) =
  1 if C_j is a large container and i fits inside C_j;
  h(i) if C_j is a wide container and i fits inside C_j;
  w(i) if C_j is a tall container and i fits inside C_j;
  w(i)·h(i) if C_j is an area container and i is ε′-small for C_j;
  ∞ if i does not fit inside C_j (in particular, if C_j is an area container and i is not ε′-small for C_j).
M_j =
  1 if C_j is a large container;
  h(C_j) if C_j is a wide container;
  w(C_j) if C_j is a tall container;
  (1 − ε′)·h(C_j)·w(C_j) if C_j is an area container.
val_j(i) = p(i) and w_q(i) = v_q(i). In the above definitions, i varies from 1 to n, j varies from 1 to k, and q varies from 1 to d.
Let I′ denote a subset of I packed in a feasible solution to C. Then I′ can also be feasibly assigned to the machines of our Vector-Max-GAP instance: simply assign all the items packed in container C_j to the j-th machine.
• If C_j is a large container, then the only item packed into it has size 1 in machine j, and the capacity M_j is also 1; hence assigning this item to the j-th machine is feasible.
• If C_j is a wide (resp. tall) container, the items packed in C_j are wide and stacked one on top of another (resp. tall and arranged side by side). Hence their total height (resp. width), which is the total size of the items assigned to the j-th machine, does not exceed the height (resp. width) of the container, which is the capacity of the j-th machine.
• If C_j is an area container, the total area of the items packed in C_j does not exceed (1 − ε′)·a(C_j), which means that the total size of the items assigned to the j-th machine does not exceed the capacity of the j-th machine.
The total weight of all the items assigned to machines does not exceed W (which equals the all-ones vector), as we did not change v while reducing C to G. This proves that the optimal value obtainable for G is at least that of the container packing problem C.
On the other hand, consider any feasible assignment of a subset of items J ⊆ I to the machines in our instance G. Let J_j be the subset of items assigned to the j-th machine. We can pack J_j into container C_j in the following way. Since the assignment is feasible, the total size of the items in J_j does not exceed the capacity M_j. If C_j is wide (resp. tall), then Σ_{i ∈ J_j} h(i) ≤ h(C_j) (resp. Σ_{i ∈ J_j} w(i) ≤ w(C_j)); hence, all the items in J_j can be stacked one on top of another (resp. arranged side by side) inside the container C_j. If C_j is an area container, then J_j consists only of items that are ε′-small for C_j, and a(J_j) ≤ (1 − ε′)·a(C_j); hence, by Lemma 15, we can pack the set J_j into C_j using NFDH. If an item is assigned to a large container, then it occupies the whole capacity (since the item size and the machine capacity are both 1), and hence only a single item can be packed in a large container.
The above arguments prove that the container packing problem is a special instance of the Vector-Max-GAP problem. Hence, we can use the PTAS for the Vector-Max-GAP problem from Section 2 to obtain a PTAS for the container packing problem.
The reduction in the case when rotations are allowed is exactly the same, except for the values of s_j(i). If C_j is a tall (resp. wide) container:
s_j(i) =
  ∞ if i fits neither with nor without rotation;
  w(i) (resp. h(i)) if i fits without rotation but not with rotation;
  h(i) (resp. w(i)) if i fits with rotation but not without rotation;
  min{w(i), h(i)} if i fits both with and without rotation.
If C_j is a large container, we set s_j(i) = ∞ if i does not fit in C_j either with or without rotation; otherwise we set s_j(i) to 1. If C_j is an area container, s_j(i) is the same as in the case without rotations. The correctness of the reduction follows by arguments similar to those above. Theorem 13.
There exists a PTAS for the container packing problem with or without rotations.
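For concreteness, the machine sizes s_j(i) and capacities M_j of the reduction (without rotations) can be sketched in code. The dict-based container/item representation, the `fits`/`is_small` helpers, and modeling ∞ by `float("inf")` are illustrative assumptions, not from the paper:

```python
# Sketch of the container-packing -> Vector-Max-GAP reduction (no rotations).
INF = float("inf")

def fits(item, container):
    """An item fits if it is no larger than the container in both axes."""
    return item["w"] <= container["w"] and item["h"] <= container["h"]

def is_small(item, container, eps):
    """eps-small: both sides at most eps times the container's sides."""
    return item["w"] <= eps * container["w"] and item["h"] <= eps * container["h"]

def size(item, container, eps):
    """s_j(i): size of item i on the machine for container C_j."""
    kind = container["kind"]  # one of "large", "wide", "tall", "area"
    if kind == "large":
        return 1.0 if fits(item, container) else INF
    if kind == "wide":
        return item["h"] if fits(item, container) else INF
    if kind == "tall":
        return item["w"] if fits(item, container) else INF
    # area container: only eps-small items are allowed
    return item["w"] * item["h"] if is_small(item, container, eps) else INF

def capacity(container, eps):
    """M_j: capacity of the machine for container C_j."""
    kind = container["kind"]
    if kind == "large":
        return 1.0
    if kind == "wide":
        return container["h"]
    if kind == "tall":
        return container["w"]
    return (1.0 - eps) * container["w"] * container["h"]
```

The values and weights are omitted from the sketch because they are trivial: val_j(i) = p(i) for every machine j, and the weight vectors v(i) are carried over unchanged.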
Our main goal is to search for the container packing structure of Theorem 4. For this, we need to guess the set of containers used to pack the items in I′_opt of (1). As mentioned in Section 3.2, the number of containers used is at most a constant (let this number be denoted by c), and they have to be picked from a set whose cardinality is poly(|I|) (let this cardinality be denoted by q(|I|)). Therefore, the number of guesses required is (q(|I|) + c choose c), which is in turn in poly(|I|).
Once we have the containers, we need to guess which of them are large, wide, tall and area containers. The number of guesses required is at most (c + 4 choose c), which is a constant.
Then we use the PTAS for the container packing problem with approximation factor (1 − ε_cont); since the optimal profit of the container packing problem is at least (1/2 − ε_struct) · OPT_GVKS(I), by choosing ε_cont := ε and ε_struct := ε/2, we get the desired approximation factor of (1/2 − ε) for the (2, d) knapsack problem.
We study the (2, d) Knapsack problem and design a (2 + ε)-approximation algorithm using corridor decomposition and the PTAS for the Vector-Max-GAP problem, which could be of independent interest. We believe that the approximation ratio might be improved to 1.89; however, this will require involved applications of complicated techniques like L-packings, as described in [9]. One might also be interested in studying a further generalization of the problem, the (d_G, d_V) Knapsack, where items are d_G-dimensional hyper-rectangles with weights in d_V dimensions. One major application of our result is in designing approximation algorithms for the (2, d) Bin Packing problem, a problem of high practical relevance. In [19], we use the presented approximation algorithm for (2, d) Knapsack as a subroutine to obtain an approximation algorithm for the (2, d) Bin Packing problem.
References
[1] Anna Adamaszek and Andreas Wiese. A quasi-PTAS for the two-dimensional geometric knapsack problem. In
SODA, pages 1491–1505, 2015.
[2] Nikhil Bansal, Alberto Caprara, Klaus Jansen, Lars Prädel, and Maxim Sviridenko. A structural lemma in 2-dimensional packing, and its implications on approximability. In ISAAC, pages 77–86, 2009.
[3] Nikhil Bansal, Marek Eliáš, and Arindam Khan. Improved approximation for vector bin packing. In SODA, pages 1561–1579, 2016.
[4] Nikhil Bansal and Arindam Khan. Improved approximation algorithm for two-dimensional bin packing. In SODA, pages 13–25, 2014.
[5] Henrik I. Christensen, Arindam Khan, Sebastian Pokutta, and Prasad Tetali. Approximation and online algorithms for multidimensional bin packing: A survey. Computer Science Review, 24:63–79, 2017.
[6] Edward G. Coffman, Jr., Michael R. Garey, David S. Johnson, and Robert E. Tarjan. Performance bounds for level-oriented two-dimensional packing algorithms. SIAM Journal on Computing, 9:808–826, 1980.
[7] A. M. Frieze and M. R. B. Clarke. Approximation algorithms for the m-dimensional 0−1 knapsack problem: Worst-case and probabilistic analyses. European Journal of Operational Research, 15:100–109, 1984.
[8] Waldo Gálvez, Fabrizio Grandoni, Afrouz Jabal Ameli, Klaus Jansen, Arindam Khan, and Malin Rau. A tight (3/2 + ε) approximation for skewed strip packing. In APPROX/RANDOM, volume 176, pages 44:1–44:18, 2020.
[9] Waldo Gálvez, Fabrizio Grandoni, Sandy Heydrich, Salvatore Ingala, Arindam Khan, and Andreas Wiese. Approximating geometric knapsack via L-packings. In
FOCS, pages 260–271, 2017.
[10] Waldo Gálvez, Fabrizio Grandoni, Salvatore Ingala, and Arindam Khan. Improved pseudo-polynomial-time approximation for strip packing. In FSTTCS, pages 9:1–9:14, 2016.
[11] Waldo Gálvez, Fabrizio Grandoni, Arindam Khan, Diego Ramírez-Romero, and Andreas Wiese. Improved approximation algorithms for 2-dimensional knapsack: Packing into multiple L-shapes, spirals, and more. In Symposium on Computational Geometry (SoCG), to appear, 2021.
[12] Fabrizio Grandoni, Stefan Kratsch, and Andreas Wiese. Parameterized approximation schemes for independent set of rectangles and geometric knapsack. In ESA, volume 144, pages 53:1–53:16, 2019.
[13] Sandy Heydrich and Andreas Wiese. Faster approximation schemes for the two-dimensional knapsack problem. In SODA, pages 79–98, 2017.
[14] Oscar H. Ibarra and Chul E. Kim. Fast approximation algorithms for the knapsack and sum of subset problems. J. ACM, 22(4):463–468, October 1975.
[15] Hans Kellerer, Ulrich Pferschy, and David Pisinger.
Knapsack problems. Springer, 2004.
[16] Arindam Khan. Approximation algorithms for multidimensional bin packing. PhD thesis, Georgia Institute of Technology, Atlanta, GA, USA, 2016.
[17] Arindam Khan, Arnab Maiti, Amatya Sharma, and Andreas Wiese. On guillotine separable packings for the two-dimensional geometric knapsack problem. In Symposium on Computational Geometry (SoCG), to appear, 2021.
[18] Arindam Khan and Madhusudhan Reddy Pittu. On guillotine separability of squares and rectangles. In APPROX/RANDOM, volume 176, pages 47:1–47:22, 2020.
[19] Arindam Khan, Eklavya Sharma, and K. V. N. Sreenivas. Geometry meets vectors: Approximation algorithms for multidimensional packing. Manuscript, 2021.
[20] Ariel Kulik and Hadas Shachnai. There is no EPTAS for two-dimensional knapsack. Information Processing Letters, 110(16):707–710, 2010.
[21] Eugene L. Lawler. Fast approximation algorithms for knapsack problems. In FOCS, pages 206–213, 1977.
[22] Tobias Mömke and Andreas Wiese. Breaking the barrier of 2 for the storage allocation problem. In ICALP, volume 168 of LIPIcs, pages 86:1–86:19, 2020.
Appendix
A.1 Next Fit Decreasing Heights (NFDH)
NFDH, introduced in [6], is a widely used algorithm to pack rectangles into a larger rectangular bin. It works as follows. First, we order the rectangles in decreasing order of height. We then place the rectangles greedily, left to right, on the base of the bin until the next rectangle would cross the boundary of the bin. At this point, we "close" the shelf, shift the base to the top edge of the first rectangle in the shelf, and continue the process. For an illustration of the algorithm, see Fig. 3.
Figure 3: The NFDH algorithm. After packing the first two items on the base of the bin, the third item cannot be packed on the same level. Hence, we close the shelf and create a new shelf.
Now, we state a well-known and important lemma about NFDH.
Lemma 14. [2]. Let S be a set of rectangles and let w and h denote the largest width and the largest height in S, respectively. Consider a bin of dimensions W × H. If a(S) ≤ (W − w)(H − h), then NFDH packs all the rectangles into the bin.
The above lemma shows that NFDH works very well if all the rectangles are small compared to the bin dimensions. The following lemma will be crucial for our purposes.
Lemma 15.
Consider a set of rectangles S with associated profits and a bin of dimensions W × H, and assume that each item in S has width at most εW and height at most εH. Suppose opt denotes the optimal profit that can be packed in the bin. Then there exists a polynomial-time algorithm that packs at least (1 − ε)·opt profit into the bin.
Proof. Let us first order the rectangles in decreasing order of their profit/area ratio. Then pick the largest prefix of rectangles T such that a(T) ≤ (1 − ε)²·WH. By the above lemma, NFDH must be able to pack all the items of T into the bin. On the other hand, since each rectangle has area at most ε²·WH, it follows that a(T) ≥ (1 − ε)²·WH − ε²·WH ≥ (1 − 2ε)·WH. Furthermore, since T contains the highest profit-density items, it follows that p(T) ≥ (1 − 2ε)·opt.
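The NFDH routine and the density-prefix algorithm from the two lemmas above can be sketched as follows. This is a simplified sketch: the tuple-based rectangle representation and all names are illustrative, and the prefix threshold follows the (1 − ε)² bound used in the proof:

```python
# Sketch of NFDH and the greedy profit-density heuristic of Lemma 15.

def nfdh(rects, W, H):
    """Pack (w, h) rectangles into a W x H bin using Next Fit Decreasing
    Heights. Returns the list of rectangles that were placed."""
    packed = []
    order = sorted(rects, key=lambda r: r[1], reverse=True)  # decreasing height
    x, y = 0.0, 0.0      # cursor position on the current shelf
    shelf_h = 0.0        # height of the current shelf (set by its first item)
    for (w, h) in order:
        if x + w > W:    # close the shelf, open a new one above it
            y += shelf_h
            x, shelf_h = 0.0, 0.0
        if w > W or y + h > H:
            continue     # no room for this rectangle
        if shelf_h == 0.0:
            shelf_h = h  # first item on the shelf fixes the shelf height
        packed.append((w, h))
        x += w
    return packed

def pack_dense_prefix(rects, W, H, eps):
    """Lemma 15 style: order (w, h, profit) rectangles by profit density and
    take the largest prefix of total area at most (1 - eps)^2 * W * H."""
    by_density = sorted(rects, key=lambda r: r[2] / (r[0] * r[1]), reverse=True)
    prefix, area = [], 0.0
    for (w, h, p) in by_density:
        if area + w * h > (1 - eps) ** 2 * W * H:
            break
        prefix.append((w, h, p))
        area += w * h
    # by Lemma 14, NFDH packs all of `prefix` when every rectangle is
    # eps-small for the bin, so it can be placed with nfdh(...)
    return prefix
```

The two functions mirror the proof structure: `pack_dense_prefix` selects the densest prefix whose area satisfies the hypothesis of Lemma 14, and `nfdh` then places that prefix shelf by shelf.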