Approximation Algorithms for the Generalized Incremental Knapsack Problem
Yuri Faenza∗   Danny Segev†   Lingyi Zhang∗

Abstract
We introduce and study a discrete multi-period extension of the classical knapsack problem, dubbed generalized incremental knapsack. In this setting, we are given a set of n items, each associated with a non-negative weight, and T time periods with non-decreasing capacities W_1 ≤ ··· ≤ W_T. When item i is inserted at time t, we gain a profit of p_{it}; however, this item remains in the knapsack for all subsequent periods. The goal is to decide if and when to insert each item, subject to the time-dependent capacity constraints, with the objective of maximizing our total profit. Interestingly, this setting subsumes as special cases a number of recently-studied incremental knapsack problems, all known to be strongly NP-hard.

Our first contribution comes in the form of a polynomial-time (1/2 − ǫ)-approximation for the generalized incremental knapsack problem. This result is based on a reformulation as a single-machine sequencing problem, which is addressed by blending dynamic programming techniques and the classical Shmoys-Tardos algorithm for the generalized assignment problem. Combined with further enumeration-based self-reinforcing ideas and newly-revealed structural properties of nearly-optimal solutions, we turn our basic algorithm into a quasi-polynomial time approximation scheme (QPTAS). Hence, under widely believed complexity assumptions, this finding rules out the possibility that generalized incremental knapsack is APX-hard.

Keywords: Incremental Optimization; Approximation Algorithms; Sequencing; QPTAS.

∗ Department of Industrial Engineering and Operations Research, Columbia University, 500 W. 120th Street, New York, NY 10027. Email: {yf2414,lz2573}@columbia.edu.
† Department of Statistics and Operations Research, School of Mathematical Sciences, Tel Aviv University, Tel Aviv 69978, Israel. Email: [email protected].
1 Introduction
In many scenarios, classical optimization models are too simplistic to faithfully capture applications arising in real-life environments. Much research has therefore been devoted to extending fundamental well-studied models to more realistic, yet still algorithmically tractable settings. A very common extension along these lines introduces time-dependent components, adding a computationally-challenging layer on top of the inherent complexity of the underlying problem. For instance, maximum flow over time, originally introduced in the seminal work of Ford and Fulkerson (1956), has recently received a great deal of attention (Skutella, 2009; Groß et al., 2012; Lin and Jaillet, 2015). Additional examples for such settings include time-expanded versions of various packing problems (Caprara, 2002; Adjiashvili et al., 2014; Epstein, 2019), network scheduling over time (Boland et al., 2014; Akrida et al., 2019), adaptive routing over time (Graf et al., 2020; Ismaili, 2017), and facility location over time (Farahani et al., 2009; Nickel and Saldanha-da Gama, 2019), just to mention a few.
Incremental knapsack problems.
In this paper, we investigate a multi-period extension of the classical knapsack problem. To provide initial intuition for the inner-workings of our model, consider the problem faced by urban planners, who intend to build infrastructural facilities over the course of several years, under budget constraints. Once an infrastructure has been built, its construction cost cannot be recovered. With a quantification of each infrastructure's annual contribution to the well-being of the community once it is in place, the goal is to maximize the total benefit over the course of the planning horizon (hence, the mayor's chances of being re-elected). A host of additional applications, such as planning the incremental growth of highways and networks, community development, and memory allocation can be found within several of the undermentioned papers and the references therein.

Computational questions of this nature can be modeled via multi-period knapsack extensions, collectively dubbed as incremental knapsack problems. In such settings, the input ingredients consist of a set of n items with strictly positive weights {w_i}_{i∈[n]}, a collection of T time periods with non-decreasing capacities W_1 ≤ ··· ≤ W_T, and a set of item-period profits, on which we further elaborate below. We say that a sequence of item sets S = (S_1, ..., S_T) is a chain when S_1 ⊆ ··· ⊆ S_T ⊆ [n]; here, S_t represents the subset of items inserted into the knapsack up to and including time period t. As such, the chain S is feasible when w(S_t) ≤ W_t for every t ∈ [T]. Our fundamental assumption is that, for each item i ∈ [n] and time t ∈ [T], we are given a non-negative parameter p_{it}, corresponding to the profit we obtain when item i is inserted at time t (i.e., when i ∈ S_t \ S_{t−1}, with the convention that S_0 = ∅). Hence, the cumulative profit of any chain S = (S_1, ..., S_T) over all time periods is captured by Φ(S) = Σ_{t∈[T]} Σ_{i∈S_t\S_{t−1}} p_{it}. We refer to the resulting formulation as the generalized incremental knapsack problem.
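To make the model concrete, the following minimal Python sketch evaluates feasibility and the cumulative profit Φ(S) of a chain. All identifiers here (Instance, is_feasible, total_profit) are illustrative and not taken from the paper; the sketch simply mirrors the definitions above.

```python
from dataclasses import dataclass
from typing import List, Set

@dataclass
class Instance:
    w: List[float]        # item weights, w[i] > 0
    W: List[float]        # non-decreasing capacities W_1 <= ... <= W_T
    p: List[List[float]]  # p[i][t] = profit for inserting item i at period t

def is_feasible(inst: Instance, chain: List[Set[int]]) -> bool:
    # A chain S_1 ⊆ ... ⊆ S_T is feasible when w(S_t) <= W_t for every period t.
    prev: Set[int] = set()
    for t, S_t in enumerate(chain):
        if not prev <= S_t or sum(inst.w[i] for i in S_t) > inst.W[t]:
            return False
        prev = S_t
    return True

def total_profit(inst: Instance, chain: List[Set[int]]) -> float:
    # Φ(S) = Σ_t Σ_{i ∈ S_t \ S_{t-1}} p_{it}, with S_0 = ∅.
    prev, total = set(), 0.0
    for t, S_t in enumerate(chain):
        total += sum(inst.p[i][t] for i in S_t - prev)
        prev = S_t
    return total
```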
Directly-related settings. To our knowledge, due to its double-dependency on both the item and time period in question, the above-mentioned profit structure makes generalized incremental knapsack the most inclusive incremental knapsack problem studied so far. Probably the simplest such problem is time-invariant incremental knapsack, where each item i is assumed to contribute a profit of φ_i to each period starting at its insertion time, corresponding to product-form profits, p_{it} = (T + 1 − t) · φ_i. Surprisingly, unlike the basic knapsack problem, Bienstock et al. (2013) showed that this extension is strongly NP-hard. On the positive side, Faenza and Malinovic (2018) proposed a polynomial-time approximation scheme (PTAS) based on rounding fractional solutions to an appropriate disjunctive relaxation. In the broader incremental knapsack problem, we have p_{it} = φ_i · Σ_{τ=t}^{T} Δ_τ, where Δ_τ ≥ 0 for every period τ. Along the same lines, when each item i gains a profit of φ_{iτ} for each period τ, starting at its insertion time, we can set p_{it} = Σ_{τ=t}^{T} φ_{iτ}. If, moreover, the per-period profits φ_{iτ} are discounted by a factor of c^{τ−t} after τ − t time units have elapsed since the insertion of item i, we set p_{it} = Σ_{τ=t}^{T} c^{τ−t} φ_{iτ}. More broadly, the generalized incremental knapsack problem allows the profits p_{it} to be completely unrelated, and in particular, to possibly be non-monotone in t. To our knowledge, prior to the present paper, this problem was not known to admit any non-trivial approximation guarantees. Moreover, we are not aware of any way to leverage existing techniques in the above-mentioned papers for dealing with the broad generality of our profit structure.

Our first contribution comes in the form of a polynomial-time constant-factor approximation for the generalized incremental knapsack problem, whose specifics are provided in Section 2.
Theorem 1.1.
For any fixed ǫ ∈ (0, 1), the generalized incremental knapsack problem can be approximated in polynomial time within factor 1/2 − ǫ.

The starting point of our algorithm is a reformulation of generalized incremental knapsack as a single-machine sequencing problem, where feasible chains are mapped to item permutations π, with π(i_1) < π(i_2) implying that the insertion time of item i_1 occurs no later than that of i_2, with some items potentially left out of the knapsack altogether. Based on this reformulation, we partition any given permutation into "heavy" and "light" chains of items, depending on how their weights compare to the combined weight of previously-inserted items. Guided by this decomposition, our approach consists of devising two approximation schemes, one competing against the best-possible profit due to heavy contributions and the other against the analogous quantity due to light contributions. Technically speaking, for heavy chains, we make use of dynamic programming ideas, whereas for light chains, we further reformulate this setting as a highly-structured instance of the generalized assignment problem, which is solved to super-optimality via the Shmoys-Tardos algorithm (1993), and truncated to a feasible near-optimal solution.
Quasi-PTAS. As previously mentioned, special cases of the generalized incremental knapsack problem are known to be strongly NP-hard, admitting a PTAS under specific profit-structure assumptions. A natural question is whether one can design efficient algorithms with the same degree of accuracy for generalized incremental knapsack, without any such assumption. Towards this goal, our second main contribution establishes the following result.
Theorem 1.2.
The generalized incremental knapsack problem admits a quasi-polynomial time approximation scheme.
Hence, under widely believed complexity assumptions, this finding rules out the possibility that generalized incremental knapsack is APX-hard, thus making it substantially different from other knapsack extensions, such as the generalized assignment problem (see brief discussion in Section 1.2). The main idea behind the above-mentioned algorithm lies in a "self-improving" procedure, which combines certain ingredients of our constant-factor approach along with further guessing methods and structural modifications to convert a black-box α-approximation into a 1/(2 − α)-approximation, as accounted for in Section 3. When iteratively applied, these improvements lead to a (1 − ǫ)-approximation, albeit with a running time exponential in 1/ǫ, log n, and log(w_max/w_min). In essence, the last term emerges from a dual manipulation of both chain-related and permutation-related representations, where the dependency on the extremal weight ratio appears to be inevitable. To bypass this obstacle, in Section 4 we employ our algorithm as a subroutine on a sequence of weakly-dependent subinstances, each with a polynomial w_max/w_min-value, obtained through a structural analysis of nearly-optimal solutions. The resulting approach is shown to be implementable in quasi-polynomial time for any given instance of the generalized incremental knapsack problem, thus proving Theorem 1.2.

In the maximum generalized assignment problem, we are given n items and m capacitated buckets. Assigning an item j to a bucket i takes w_{ij} capacity units while generating a profit of p_{ij}. The goal is to compute a feasible item-to-bucket assignment whose overall profit is maximized. For the minimization variant of this problem, Shmoys and Tardos (1993) proposed an LP-based 2-approximation, which was observed by Chekuri and Khanna (2005) to be easily adaptable into a 1/2-approximation for the maximization variant; the currently best known performance guarantee for maximum generalized assignment is 1 − 1/e + δ, for some absolute constant δ > 0.
Earlier constant-factor approximations were obtained by Fleischer et al. (2011), Nutov et al. (2006), and Cohen et al. (2006).

In the unsplittable flow on a path problem, we are given an edge-capacitated path as well as a collection of tasks. Each task is characterized by its own subpath, profit, and demand. The goal is to select a subset of tasks of maximum total profit, under the constraint that the overall demand of the selected tasks along each edge resides within its capacity. The currently best polynomial-time approximation in this context is 5/3 + ǫ, for any fixed ǫ > 0,
due to Grandoni et al. (2018), who improved on earlier constant-factor guarantees by Bonsma et al. (2014), Anagnostopoulos et al. (2018), and Calinescu et al. (2011). In parallel, unsplittable flow on a path admits a quasi-PTAS, as shown by Bansal et al. (2006) and by Batra et al. (2015). From a technical perspective, the methods involved are very different from those exploited in our paper, and it is unclear whether algorithmic ideas in one setting are migratable to the other.

2 A (1/2 − ǫ)-Approximation

In this section, we present our first approximability result for the generalized incremental knapsack problem, showing that the optimal profit can be efficiently approached within a factor arbitrarily close to 1/2. The specifics of this finding, along with its corresponding running time, are formally stated in the next theorem.

Theorem 2.1.
For any accuracy level ǫ ∈ (0, 1), the generalized incremental knapsack problem can be approximated within factor 1/2 − ǫ. The running time of our algorithm is O(n^{O(1/ǫ)} · |I|^{O(1)}), where |I| = Θ(n log‖w‖_∞ + nT log‖p‖_∞ + T log‖W‖_∞) stands for the input size.
Outline. For simplicity of presentation, we start off Section 2.1 by proposing an equivalent formulation of the generalized incremental knapsack problem as a single-machine sequencing problem. Given this reformulation, we explain in Section 2.2 how the profit function can be decomposed into "heavy" and "light" item contributions. Somewhat informally, with respect to an unknown optimal sequencing solution, the marginal contribution of each item to the overall profit will be classified as being either heavy or light, depending on the item's weight and position on the timeline. Guided by this decomposition, our approach consists of devising two approximation schemes, one competing against the best-possible profit due to heavy contributions (Section 2.3) and the other against the analogous quantity due to light contributions (Section 2.4). The best of these algorithms will be shown to provide an approximation guarantee of 1/2 − ǫ, thereby deriving Theorem 2.1. It is worth pointing out that the techniques involved in competing against light contributions will be further utilized in Sections 3 and 4 to obtain an approximation scheme for general instances, albeit in quasi-polynomial time.

In what follows, we present an equivalent sequencing reformulation for the generalized incremental knapsack problem. As explained in subsequent sections, the interchangeability between these formulations allows us to describe our algorithmic ideas and to analyze their performance guarantees with greater ease. For this purpose, we proceed by arguing that the generalized incremental knapsack problem can be rephrased as a sequencing problem on a single machine as follows:

• Let π : [n] → [n] be a permutation of the underlying items, where π(i) stands for the position of item i.

• By viewing the weight of each item as its processing time, we define the completion time of item i with respect to π as C_π(i) = Σ_{j∈[n]: π(j)≤π(i)} w_j. Accordingly, the profit ϕ_π(i) of this item is given by the largest profit we can gain by inserting i at a time period whose capacity is at least C_π(i), namely, ϕ_π(i) = max{p_{i,t} : t ∈ [T + 1] and W_t ≥ C_π(i)}, with the convention that W_{T+1} = ∞ and p_{i,T+1} = 0 for every item i.

• The overall profit of the permutation π is specified by Ψ(π) = Σ_{i∈[n]} ϕ_π(i). Our objective is to compute a permutation whose profit is maximized.

The next lemma captures the equivalence between the item-introducing perspective of the generalized incremental knapsack problem and the sequencing perspective described above.
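The sequencing objective can be evaluated directly from the definitions above. The following sketch (identifiers are ours, not the paper's) computes the completion times and the resulting profit Ψ(π); the dummy period T + 1 is encoded by a default profit of zero.

```python
def completion_times(w, order):
    # order lists item indices by position; C_π(i) is the prefix weight up to and including i.
    C, running = {}, 0.0
    for i in order:
        running += w[i]
        C[i] = running
    return C

def sequencing_profit(w, W, p, order):
    # Ψ(π) = Σ_i ϕ_π(i), where ϕ_π(i) = max{p[i][t] : W_t >= C_π(i)},
    # with the convention W_{T+1} = ∞ and p_{i,T+1} = 0.
    C = completion_times(w, order)
    total = 0.0
    for i in order:
        total += max((p[i][t] for t in range(len(W)) if W[t] >= C[i]), default=0.0)
    return total
```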
Lemma 2.2. Any feasible chain S can be mapped to a permutation π_S with Ψ(π_S) ≥ Φ(S). Conversely, any permutation π of a subset of the items can be mapped to a feasible chain S_π with Φ(S_π) = Ψ(π).
Proof. First, given a feasible chain S, we construct the permutation π_S as follows:

• For each t ∈ [T], let π_t be an arbitrary permutation of the items introduced in this period, S_t \ S_{t−1}. In addition, let π_{T+1} be an arbitrary permutation of the remaining items, i.e., those in [n] \ S_T.

• The permutation π_S is defined as the concatenation of π_1, ..., π_{T+1} in this order. Namely, for i ∈ S_t \ S_{t−1} with t ∈ [T], we have π_S(i) = π_t(i) + |S_{t−1}|, whereas for i ∈ [n] \ S_T, we have π_S(i) = π_{T+1}(i) + |S_T|.

To prove that Ψ(π_S) ≥ Φ(S), it suffices to argue that ϕ_{π_S}(i) ≥ p_{i,t_i} for every item i ∈ S_T, where t_i stands for the insertion time of item i with respect to the chain S. To derive this relation, note that C_{π_S}(i) ≤ w(S_{t_i}) ≤ W_{t_i} for any such item, where the last inequality follows from the feasibility of S. Therefore, ϕ_{π_S}(i) = max{p_{i,t} : t ∈ [T + 1] and W_t ≥ C_{π_S}(i)} ≥ p_{i,t_i}.

Conversely, given a permutation π of any subset of items, we construct a chain S_π that includes all items with a completion time of at most W_T. Specifically, the insertion time t_i of each such item i will be the time period that maximizes p_{i,t_i} over the set {t ∈ [T] : W_t ≥ C_π(i)}. As such, the chain S_π is indeed feasible, since w(S_t) ≤ Σ_{i∈[n]: C_π(i)≤W_t} w_i ≤ W_t for every t ∈ [T]. To show that Φ(S_π) = Ψ(π), it remains to explain why p_{i,t_i} = ϕ_π(i) for inserted items and why ϕ_π(i) = 0 for non-inserted ones. To this end, note that our choice for the insertion time t_i follows the definition of ϕ_π(i) to the letter, meaning that p_{i,t_i} = ϕ_π(i). On the other hand, for any item i we do not insert to S_π, one has ϕ_π(i) = 0, since C_π(i) > W_T.

In what follows, we focus our attention on the sequencing formulation and present a decomposition of the profit function Ψ into "heavy" and "light" contributions, collected over geometrically-increasing intervals. With the necessary definitions in place, we outline how a decomposition of this nature guides us in proposing two approximation schemes, to separately compete against heavy and light contributions. The main result of this section, as stated in Theorem 2.1, will eventually be derived by taking the more profitable of these approaches.

For simplicity of presentation, we assume without loss of generality that ǫ ∈ (0, 1/2), and moreover, that 1/ǫ is an integer. In addition, we assume that w_min = min_{i∈[n]} w_i = 3; the latter property can easily be enforced through scaling all item weights w_i and time period capacities W_t by a factor of 3/w_min.

Profit decomposition. We begin by geometrically partitioning the interval [0, Σ_{i∈[n]} w_i] by powers of 1 + ǫ into a collection of intervals I_0, ..., I_K, where K = ⌈log_{1+ǫ}(Σ_{i∈[n]} w_i)⌉.
Specifically, I_0 = [0, 1] and I_k = ((1 + ǫ)^{k−1}, (1 + ǫ)^k] for k ∈ [K]. With this definition, the profit Ψ(π) = Σ_{i∈[n]} ϕ_π(i) of any permutation π can be expressed by summing item contributions according to the interval in which their completion times fall, i.e.,

Ψ(π) = Σ_{k∈[K]} Σ_{i∈[n]: C_π(i)∈I_k} ϕ_π(i).

We say that item i is k-heavy when w_i ≥ ǫ · (1 + ǫ)^k; otherwise, this item is k-light. We denote the sets of k-heavy and k-light items by H_k and L_k, respectively, noting that H_0 ⊇ H_1 ⊇ ··· ⊇ H_K and that L_k = [n] \ H_k for every k. As a side note, one can easily verify that all items are 0-heavy (i.e., H_0 = [n]), by recalling that w_min = 3 and ǫ < 1/2. Consequently, the profit Ψ(π) can be refined by separating k-heavy and k-light items, namely,

Ψ(π) = Σ_{k∈[K]} Σ_{i∈H_k: C_π(i)∈I_k} ϕ_π(i) + Σ_{k∈[K]} Σ_{i∈L_k: C_π(i)∈I_k} ϕ_π(i).    (1)

We designate the first and second terms in the above expression by Ψ_heavy(π) and Ψ_light(π), respectively.
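The decomposition (1) is straightforward to compute for a concrete permutation. The sketch below (our own illustrative code, mirroring the definitions of I_k and k-heaviness above) returns the pair (Ψ_heavy(π), Ψ_light(π)).

```python
import math

def interval_index(x, eps):
    # Index k with x ∈ I_k, where I_0 = [0, 1] and I_k = ((1+eps)^{k-1}, (1+eps)^k].
    return 0 if x <= 1.0 else math.ceil(math.log(x, 1.0 + eps))

def split_profit(w, W, p, order, eps):
    # Splits Ψ(π) into (Ψ_heavy, Ψ_light) as in Equation (1): an item counts as heavy
    # when it is k-heavy, i.e. w_i >= eps*(1+eps)^k, for the interval I_k of its completion time.
    heavy = light = 0.0
    running = 0.0
    for i in order:
        running += w[i]                               # completion time C_π(i)
        phi = max((p[i][t] for t in range(len(W)) if W[t] >= running), default=0.0)
        k = interval_index(running, eps)
        if w[i] >= eps * (1.0 + eps) ** k:
            heavy += phi
        else:
            light += phi
    return heavy, light
```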
Overview. Let π∗ be an optimal permutation, with Ψ(π∗) = Ψ_heavy(π∗) + Ψ_light(π∗). The remainder of this section is dedicated to presenting two approximation schemes that would separately compete against Ψ_heavy(π∗) and Ψ_light(π∗):

• Heavy contributions: Section 2.3 explains how dynamic programming ideas allow us to efficiently compute a permutation π_heavy : [n] → [n] satisfying Ψ(π_heavy) ≥ (1 − ǫ) · Ψ_heavy(π∗). The required running time will be O(n^{O(1/ǫ)} · |I|).

• Light contributions: Section 2.4 argues that the generalized assignment algorithm of Shmoys and Tardos (1993) can be leveraged to compute a permutation π_light : [n] → [n] satisfying Ψ(π_light) ≥ (1 − ǫ) · Ψ_light(π∗). This algorithm can be implemented in O((|I|/ǫ)^{O(1)}) time.

Consequently, to establish the approximation guarantee stated in Theorem 2.1, we pick the more profitable permutation out of π_heavy and π_light, to obtain a profit of

max{Ψ(π_heavy), Ψ(π_light)} ≥ (1/2) · (Ψ(π_heavy) + Ψ(π_light)) ≥ ((1 − ǫ)/2) · (Ψ_heavy(π∗) + Ψ_light(π∗)) = ((1 − ǫ)/2) · Ψ(π∗).

2.3 Algorithm for heavy contributions

In what follows, we present an O(n^{O(1/ǫ)} · |I|)-time dynamic programming approach for computing a permutation π_heavy with a profit of Ψ(π_heavy) ≥ (1 − ǫ) · Ψ_heavy(π∗).

The intuition behind our algorithm begins with the observation that, in order to compete against Ψ_heavy(π∗), we can safely eliminate items that are classified as light with respect to the interval in which their completion time falls. While the remaining items will be shifted back in the residual permutation, potentially being completed in a lower-index interval, each of them will still be heavy. To formalize these notions, for a subset of items S ⊆ [n] and a permutation π : S → [|S|], we say that the pair (S, π) is bulky if, for every k ∈ [K], all items with a completion time in I_k are k-heavy, i.e., {i ∈ S : C_π(i) ∈ I_k} ⊆ H_k.
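The bulkiness test is a direct restriction of the heavy/light classification; the short helper below (illustrative code, reusing interval_index from the previous sketch) checks it for a given pair (S, π) represented by the ordered list of items in S.

```python
def is_bulky(w, order, eps):
    # (S, π) is bulky when every item whose completion time lands in I_k is k-heavy.
    running = 0.0
    for i in order:
        running += w[i]                               # completion time C_π(i)
        k = interval_index(running, eps)              # helper from the previous sketch
        if w[i] < eps * (1.0 + eps) ** k:
            return False
    return True
```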
The next claim shows that bulky pairs can attain a total profit of at least Ψ_heavy(π∗).

Lemma 2.3. There exist a subset of items S ⊆ [n] and a permutation π : S → [|S|] such that (S, π) is bulky and Σ_{i∈S} ϕ_π(i) ≥ Ψ_heavy(π∗).

Proof.
With respect to the optimal permutation π∗, we define a new permutation π by eliminating, for every k ∈ [K], all items i ∈ L_k with C_{π∗}(i) ∈ I_k. The subset S will consist of the remaining items. To see why (S, π) is bulky, note that C_π(i) ≤ C_{π∗}(i) for any i ∈ S, meaning that each such item is still heavy with respect to the interval that contains C_π(i), since H_0 ⊇ ··· ⊇ H_K. In terms of profit, the latter observation implies that, for every item i ∈ S,

ϕ_π(i) = max{p_{i,t} : t ∈ [T + 1] and W_t ≥ C_π(i)} ≥ max{p_{i,t} : t ∈ [T + 1] and W_t ≥ C_{π∗}(i)} = ϕ_{π∗}(i).

Summing the above inequality over all items in S, we have Σ_{i∈S} ϕ_π(i) ≥ Σ_{i∈S} ϕ_{π∗}(i) = Ψ_heavy(π∗), where the latter equality holds since every eliminated item does not contribute toward Ψ_heavy(π∗) but rather toward Ψ_light(π∗).
Additional notation. For a bulky pair (S, π), we define its top index as top(S, π) = max{k ∈ [K] : {C_π(i) : i ∈ S} ∩ I_k ≠ ∅}, that is, the largest index of an interval that contains at least one completion time. In addition, we define core(S) as the set of min{1/ǫ, |S|} heaviest items in S, breaking ties by adding to core(S) small-index items before large-index ones. Finally, the makespan of (S, π) corresponds to the maximum completion time of an item in S with respect to the permutation π; in our case, this measure identifies with w(S).

The technical crux in restricting attention to bulky pairs will be exhibited through our dynamic programming formulation. As formally explained below, by focusing on the dual objective of makespan minimization, we prove the existence of a well-hidden optimal substructure within the sequencing problem.
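Both quantities just introduced admit one-line computations; the illustrative helpers below (again reusing interval_index from an earlier sketch) follow the definitions of top(S, π) and core(S) verbatim.

```python
def top_index(w, order, eps):
    # top(S, π): the largest k such that some completion time falls in I_k (0 when S is empty).
    running, top = 0.0, 0
    for i in order:
        running += w[i]
        top = max(top, interval_index(running, eps))
    return top

def core(S, w, eps):
    # core(S): the min{1/eps, |S|} heaviest items of S, ties broken toward smaller indices.
    budget = min(int(round(1.0 / eps)), len(S))
    return set(sorted(S, key=lambda i: (-w[i], i))[:budget])
```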
States. Each state (k, ψ_k, Q_k) of our dynamic program consists of the following parameters, whose precise role will be clarified once their corresponding value function is presented:

• The index of the current interval k, taking values in [K].

• The total profit ψ_k collected thus far, due to items whose completion time falls in I_1, ..., I_k. For the time being, ψ_k will be treated as a continuous parameter, taking values in [0, Σ_{i∈[n]} max_{t∈[T]} p_{i,t}].

• The core Q_k of the set of items whose completion time falls in I_1, ..., I_k. By definition of core(·), this parameter is restricted to item sets of cardinality at most 1/ǫ.

It is important to emphasize that, since ψ_k is a continuous parameter, the dynamic programming formulation below is still not algorithmic in nature, and should be viewed as a characterization of optimal solutions. In Section 2.3.3, we explain how to discretize the parameter ψ_k to take polynomially-many values while incurring only an ǫ-loss in profit.
Value function. The value function F(k, ψ_k, Q_k) represents the minimum makespan w(S) that can be attained over all bulky pairs (S, π) that satisfy the following conditions:
1. Top index: top(S, π) ≤ k.

2. Total profit: Ψ(π) ≥ ψ_k.

3. Core: core(S) = Q_k.

For ease of presentation, we denote the collection of bulky pairs that meet conditions 1-3 by Bulky(k, ψ_k, Q_k). When the latter set is empty, we define F(k, ψ_k, Q_k) = ∞. With these definitions, Lemma 2.3 proves in retrospect the existence of a bulky pair (S, π) ∈ Bulky(K, Ψ_heavy(π∗), core(S)) with F(K, Ψ_heavy(π∗), core(S)) < ∞. Therefore, had we been able to compute the maximal value ψ∗ that satisfies F(K, ψ∗, Q_K) < ∞ for some core Q_K of at most 1/ǫ items, its corresponding bulky pair would have guaranteed a profit of at least ψ∗ ≥ Ψ_heavy(π∗).
Optimal substructure. To this end, we proceed by unveiling the optimal substructure that allows us to compute the value function F by means of dynamic programming. In order to gain ample intuition, suppose that (S, π) is a bulky pair that attains F(k, ψ_k, Q_k). Then, we argue that, by eliminating from S the set of items Q whose completion time falls within the interval I_k, one obtains a bulky pair that attains F(k − 1, ψ_{k−1}, Q_{k−1}), where the residual profit ψ_{k−1} is obtained by removing from ψ_k the contribution of items in Q, and Q_{k−1} is an appropriately chosen core.

Formally, suppose that Bulky(k, ψ_k, Q_k) ≠ ∅, and let (S, π) be a bulky pair that minimizes w(S) over this set. Let Q = {i ∈ S : C_π(i) ∈ I_k} be the set of items in S whose completion time with respect to π falls in the interval I_k. Note that since top(S, π) ≤ k, completion times cannot fall in I_{k+1}, ..., I_K. We first argue that |Q| ≤ 1/ǫ. To verify this claim, note that since (S, π) is bulky, Q ⊆ H_k. As a result, every item in Q has a weight of at least ǫ · (1 + ǫ)^k, while all completion times of items in Q are at most (1 + ǫ)^k, meaning that we necessarily have |Q| ≤ (1 + ǫ)^k / (ǫ · (1 + ǫ)^k) = 1/ǫ.

Now, let us define the pair (Ŝ, π̂), where Ŝ = S \ Q and π̂ : Ŝ → [|Ŝ|] is the permutation where items in Ŝ follow their relative order in π, that is, for any pair of items i_1 and i_2, we have π̂(i_1) < π̂(i_2) if and only if π(i_1) < π(i_2). In addition, let ψ_{k−1} = [ψ_k − Σ_{i∈Q} ϕ_π(i)]^+ and Q_{k−1} = core(Ŝ), where [x]^+ = max{x, 0}. These definitions directly ensure that (Ŝ, π̂) ∈ Bulky(k − 1, ψ_{k−1}, Q_{k−1}). Moreover, as we show next, (Ŝ, π̂) forms an optimal solution with respect to the latter state.

Lemma 2.4. w(Ŝ) = F(k − 1, ψ_{k−1}, Q_{k−1}).

Proof.
Suppose there exists some bulky pair (S̃, π̃) ∈ Bulky(k − 1, ψ_{k−1}, Q_{k−1}) with w(S̃) < w(Ŝ). Consider the pair (S̃⁺, π̃⁺) in which S̃⁺ = S̃ ∪ Q and π̃⁺ appends the items of Q, in their π-induced order, after those of S̃.

Claim 2.5. (S̃⁺, π̃⁺) ∈ Bulky(k, ψ_k, Q_k).

We have just arrived at a contradiction to the fact that (S, π) minimizes w(S) over the set Bulky(k, ψ_k, Q_k), by observing that w(S̃⁺) = w(S̃) + w(Q) < w(Ŝ) + w(Q) = w(S).

Recursive equations. In light of this structural characterization, to obtain a recursive equation for F(k, ψ_k, Q_k), it suffices to "guess" the collection of items Q, their internal permutation π_Q, the residual profit requirement ψ_{k−1}, and the resulting core Q_{k−1}. Formally, F(k, ψ_k, Q_k) is given by minimizing F(k − 1, ψ_{k−1}, Q_{k−1}) + w(Q) over all choices of Q, π_Q, ψ_{k−1}, and Q_{k−1} that simultaneously satisfy the following conditions:

1. Top index: F(k − 1, ψ_{k−1}, Q_{k−1}) + w(Q) ≤ (1 + ǫ)^k. This constraint ensures that, with the addition of Q, all items can still be packed within I_1, ..., I_k.

2. Total profit: ψ_{k−1} ≥ [ψ_k − Σ_{i∈Q} ϕ_{π_Q}(i)]^+, where the term ϕ_{π_Q}(i) denotes the profit of item i with respect to the permutation π_Q, when its completion time is increased by F(k − 1, ψ_{k−1}, Q_{k−1}). This constraint guarantees that, by appending π_Q, we obtain a total profit of at least ψ_k.

3. Core: Q_{k−1} ∩ Q = ∅, core(Q_{k−1} ∪ Q) = Q_k, Q ⊆ H_k, and |Q| ≤ 1/ǫ. These constraints ensure a correct core update as a result of adding the item set Q, where the latter set consists of at most 1/ǫ items, each restricted to being k-heavy. To better understand the requirement core(Q_{k−1} ∪ Q) = Q_k, note that the core resulting from the addition of Q can be computed without complete knowledge of all previously packed items, as all those outside the current core Q_{k−1} are irrelevant for this purpose (i.e., too light to be one of the 1/ǫ heaviest).

As previously mentioned, due to the continuity of the profit requirement ψ_k, it remains to propose an appropriate discretization of this parameter, so that we obtain a polynomially-sized state space with only negligible loss in profit.

The discrete program F̃. To this end, we alter the underlying state space of our dynamic program, by restricting the continuous parameter ψ_k to a finite set of values, D_ψ = {d · ǫ p_max / n : d ∈ [n²/ǫ]}. Here, p_max is the maximum profit attainable by any single item, i.e., p_max = max{p_{it} : i ∈ [n], t ∈ [T], and w_i ≤ W_t}. We make use of F̃(k, ψ_k, Q_k) to designate the value function F restricted to the resulting set of states, and similarly, B̃ulky(k, ψ_k, Q_k) will stand for the collection of bulky pairs that meet conditions 1-3. As a side note, beyond the additional restriction on ψ_k, both F̃ and B̃ulky are defined identically to F and Bulky.

Analysis. We remind the reader that, in Section 2.3.2, the quantity ψ∗ was defined as the maximal value satisfying F(K, ψ∗, Q_K) < ∞ for some core Q_K of at most 1/ǫ items, noting that its corresponding bulky pair guarantees a profit of at least ψ∗ ≥ Ψ_heavy(π∗). In order to establish a parallel claim with respect to the discretized program F̃, we prove in Lemma 2.6 a lower bound of (1 − ǫ) · ψ∗ on the analogous quantity ψ̃ that satisfies F̃(K, ψ̃, Q_K) < ∞; the proof is provided in Appendix A.2. It follows that our dynamic program computes a bulky pair (S, π) in which the permutation π has a profit of Ψ(π) ≥ ψ̃ ≥ (1 − ǫ) · ψ∗ ≥ (1 − ǫ) · Ψ_heavy(π∗).
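To keep the pieces in one place, the discretized recursion just described can be written compactly as

F̃(k, ψ_k, Q_k) = min { F̃(k − 1, ψ_{k−1}, Q_{k−1}) + w(Q) : (Q, π_Q, ψ_{k−1}, Q_{k−1}) satisfy conditions 1-3 },

with ψ_k ranging over D_ψ, the convention F̃(k, ψ_k, Q_k) = ∞ when no such choice exists, and the algorithm eventually returning the largest value ψ̃ ∈ D_ψ for which F̃(K, ψ̃, Q_K) < ∞ for some core Q_K.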
Lemma 2.6. There exists a value ψ̃ ∈ D_ψ such that ψ̃ ≥ (1 − ǫ) · ψ∗ and such that F̃(K, ψ̃, Q_K) < ∞ for some core Q_K.

Running time. We first observe that the function F̃(k, ψ_k, Q_k) needs to be evaluated over O(n^{O(1/ǫ)} · |I|) possible states. Indeed, there are O(K) choices for the interval index k, where by definition, K = ⌈log_{1+ǫ}(Σ_{i∈[n]} w_i)⌉ = O(|I|/ǫ). As for the profit parameter ψ_k, following its restriction to the set D_ψ, we ensure that ψ_k takes only |D_ψ| = O(n²/ǫ) values. Finally, since the core Q_k ⊆ [n] consists of at most 1/ǫ items, there are only O(n^{O(1/ǫ)}) subsets to consider. Now, evaluating each state requires minimizing the restricted function F̃(k − 1, ψ_{k−1}, Q_{k−1}) + w(Q) over all choices of Q, π_Q, ψ_{k−1}, and Q_{k−1} that simultaneously satisfy conditions 1-3 of the recursive equations (see Section 2.3.2). In this context, the number of joint configurations for these parameters is O(n^{O(1/ǫ)}). Specifically, the profit parameter ψ_{k−1} and the core Q_{k−1} respectively take O(n²/ǫ) and O(n^{O(1/ǫ)}) values as before. In addition, the number of choices for the augmenting set Q is O(n^{O(1/ǫ)}), due to being comprised of at most 1/ǫ items, and there are only O((1/ǫ)^{O(1/ǫ)}) permutations π_Q of these items. To summarize, we incur an overall running time of O(n^{O(1/ǫ)} · |I|).

2.4 Algorithm for light contributions

In this section, we construct a suitably-defined instance of the maximum generalized assignment problem, intended to compete against Ψ_light(π∗). We show that, when applied to this highly-structured instance, the LP-based algorithm of Shmoys and Tardos (1993) can be leveraged to identify in O((|I|/ǫ)^{O(1)}) time a permutation π_light with a profit of Ψ(π_light) ≥ (1 − ǫ) · Ψ_light(π∗).

The general intuition behind our construction resides in viewing the intervals I_1, ..., I_{K−1} as distinct buckets, to which items should be assigned subject to capacity constraints. Clearly, this perspective lacks the extra flexibility of the sequencing formulation, where items may be crossing between multiple successive intervals. In addition, any item-to-bucket assignment has to be associated with a specific profit a-priori, whereas the sequencing-related profits depend on the exact completion time of each item. As explained in the sequel, our approach bypasses the first obstacle by focusing on light items, for which greedy repacking of rounded solutions will be argued to be near-optimal. In regard to the second obstacle, we will allow seemingly unattainable profits, showing that appropriately scaled fractional solutions can be rounded to attain these profits up to negligible loss in optimality.

The construction. Guided by this intuition, we define an instance of the maximum generalized assignment problem as follows:

• Buckets: For every k ∈ [K − 1], we introduce a bucket B_k. The capacity of this bucket is capacity(B_k) = (1 + ǫ)^k − (1 + ǫ)^{k−1}, i.e., precisely the length of the interval I_k. It is worth mentioning that there are no buckets corresponding to the intervals I_0 and I_K.

• Items: The set of items is still [n], where each item has a weight of w_i.

• Allowed assignments and profits: An item i can be assigned to bucket B_k only when i is (k + 1)-light. For such an assignment, our profit is q_{ik} = max{p_{i,t} : t ∈ [T + 1] and W_t ≥ (1 + ǫ)^k}.

The goal is to compute a capacity-feasible assignment whose total profit is maximized.
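For concreteness, the instance just described can be generated mechanically from the original input. The sketch below is illustrative code of ours (not the paper's implementation); it follows the displayed definitions of capacity(B_k), (k + 1)-lightness, and q_{ik}, again encoding the dummy period T + 1 by a default profit of zero.

```python
def build_gap_instance(w, W, p, eps, K):
    # Buckets B_1, ..., B_{K-1}; bucket B_k has capacity (1+eps)^k - (1+eps)^{k-1}.
    # Item i may be assigned to B_k only if it is (k+1)-light, i.e. w_i < eps*(1+eps)^{k+1},
    # in which case the assignment profit is q_ik = max{p[i][t] : W_t >= (1+eps)^k}.
    capacity = {k: (1.0 + eps) ** k - (1.0 + eps) ** (k - 1) for k in range(1, K)}
    allowed, q = {k: [] for k in range(1, K)}, {}
    for k in range(1, K):
        for i in range(len(w)):
            if w[i] < eps * (1.0 + eps) ** (k + 1):
                allowed[k].append(i)
                q[(i, k)] = max((p[i][t] for t in range(len(W))
                                 if W[t] >= (1.0 + eps) ** k), default=0.0)
    return capacity, allowed, q
```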
IP formulation. Moving forward, it is instructive to represent this instance through its standard integer programming formulation:

max  Σ_{i∈[n]} Σ_{k∈[K−1]: i∈L_{k+1}} q_{ik} x_{ik}
s.t. Σ_{k∈[K−1]: i∈L_{k+1}} x_{ik} ≤ 1                for all i ∈ [n]
     Σ_{i∈L_{k+1}} w_i x_{ik} ≤ capacity(B_k)          for all k ∈ [K − 1]
     x_{ik} ∈ {0, 1}                                   for all k ∈ [K − 1], i ∈ L_{k+1}        (IP)

In this formulation, each decision variable x_{ik} indicates whether item i is assigned to bucket B_k. The first constraint guarantees that every item is assigned to at most one bucket, and the second constraint ensures that the total weight of the items assigned to each bucket fits within its capacity. The next lemma shows that any feasible assignment can be efficiently mapped to a permutation for our sequencing formulation that collects at least as much profit; the proof is provided in Appendix A.3.

Lemma 2.7. Any feasible solution x to (IP) can be translated in O(nK) time to a permutation π_x : [n] → [n] satisfying Ψ(π_x) ≥ Σ_{i∈[n]} Σ_{k∈[K−1]: i∈L_{k+1}} q_{ik} x_{ik}.

LP-relaxation and lower bound. The linear relaxation of this integer program, (LP), is obtained by replacing the integrality constraints x_{ik} ∈ {0, 1} with non-negativity constraints, x_{ik} ≥ 0. The next lemma relates the resulting fractional optimum to Ψ_light(π∗).

Lemma 2.8. OPT(LP) ≥ (1 − ǫ) · Ψ_light(π∗).

Proof. In order to derive the desired bound, we prove that (LP) has a feasible fractional solution x with an objective value of at least (1 − ǫ) · Ψ_light(π∗). To this end, let C∗_k = {i ∈ L_k : C_{π∗}(i) ∈ I_k} be the subset of k-light items whose completion time with respect to the optimal permutation π∗ falls in I_k. With this notation, recall that

Ψ_light(π∗) = Σ_{k∈[K]} Σ_{i∈C∗_k} ϕ_{π∗}(i) = Σ_{k=2}^{K} Σ_{i∈C∗_k} ϕ_{π∗}(i),    (2)

where the second equality follows by observing that completion times cannot fall in either of the intervals I_0 and I_1, since their union is [0, 1 + ǫ] whereas w_min = 3, by our initial assumption in Section 2.2.

We define a fractional solution x to (LP) by setting x_{i,k−1} = 1 − ǫ for every 2 ≤ k ≤ K and i ∈ C∗_k; all other variables take zero values. To verify the feasibility of this solution, note that we clearly have Σ_{k∈[K−1]: i∈L_{k+1}} x_{ik} ≤ 1 for every i ∈ [n]. As for the capacity constraints, for every 2 ≤ k ≤ K,

Σ_{i∈[n]} w_i x_{i,k−1} = (1 − ǫ) · Σ_{i∈C∗_k} w_i ≤ (1 − ǫ) · ((1 + ǫ)^k − (1 + ǫ)^{k−1} + ǫ · (1 + ǫ)^k) ≤ (1 − ǫ) · (1 + 5ǫ) · ((1 + ǫ)^{k−1} − (1 + ǫ)^{k−2}) ≤ capacity(B_{k−1}).

Here, the first inequality holds since all items in C∗_k have completion times in I_k, implying that their total weight is upper bounded by the length (1 + ǫ)^k − (1 + ǫ)^{k−1} of this interval plus the maximum weight of any item in C∗_k, which is at most ǫ · (1 + ǫ)^k due to being k-light. The second inequality can easily be verified to hold for every ǫ ∈ (0, 1/2). We conclude by evaluating the objective value of x, to obtain

OPT(LP) ≥ Σ_{i∈[n]} Σ_{k∈[K−1]: i∈L_{k+1}} q_{ik} x_{ik} = (1 − ǫ) · Σ_{k=2}^{K} Σ_{i∈C∗_k} q_{i,k−1} ≥ (1 − ǫ) · Σ_{k=2}^{K} Σ_{i∈C∗_k} ϕ_{π∗}(i) = (1 − ǫ) · Ψ_light(π∗),

where the last equality is precisely (2). To understand the second inequality, note that for every item i ∈ C∗_k,

ϕ_{π∗}(i) = max{p_{i,t} : t ∈ [T + 1] and W_t ≥ C_{π∗}(i)} ≤ max{p_{i,t} : t ∈ [T + 1] and W_t ≥ (1 + ǫ)^{k−1}} = q_{i,k−1},

where the above inequality holds since C_{π∗}(i) ≥ (1 + ǫ)^{k−1}.

We proceed by utilizing the LP-rounding approach of Shmoys and Tardos (1993, Sec. 2), which was originally proposed for the minimum generalized assignment problem.
Specifically, given an optimal fractional solution to the linear program (LP), their algorithm computes an integral vector x̂ that satisfies the following properties:

1. Objective value: x̂ has a super-optimal objective value, i.e.,

Σ_{i∈[n]} Σ_{k∈[K−1]: i∈L_{k+1}} q_{ik} x̂_{ik} ≥ OPT(LP).    (3)

2. Item assignment: x̂ assigns each item to at most one bucket, namely, Σ_{k∈[K−1]: i∈L_{k+1}} x̂_{ik} ≤ 1 for every i ∈ [n].

3. Fixable capacity: For every bucket B_k, if its capacity is violated (i.e., Σ_{i∈L_{k+1}} w_i x̂_{ik} > capacity(B_k)), there exists a single infeasibility item i_inf(k) with x̂_{i_inf(k),k} = 1 whose removal restores the feasibility of that bucket, i.e.,

Σ_{i∈L_{k+1}} w_i x̂_{ik} − w_{i_inf(k)} ≤ capacity(B_k).    (4)

Restoring feasibility with negligible profit loss. Given the above-mentioned properties, a feasible integral solution can obviously be obtained by eliminating the infeasibility item of each bucket with violated capacity. However, this straightforward approach may decrease the objective value by a non-ǫ-bounded factor. Instead, the final step of our algorithm greedily defines an integral solution x̂⁻ ≤ x̂ which is feasible for (IP) and has an objective value of at least (1 − ǫ) · OPT(LP). To this end, for every bucket B_k whose capacity is not violated by x̂, we simply have x̂⁻_{ik} = x̂_{ik} for all i ∈ L_{k+1}. In contrast, for every bucket B_k whose capacity is violated, we proceed as follows:

• Let i_1, ..., i_M be an indexing of the set {i ∈ L_{k+1} : x̂_{ik} = 1} such that q_{i_1,k}/w_{i_1} ≥ ··· ≥ q_{i_M,k}/w_{i_M}.

• Let µ be the maximal index for which Σ_{m∈[µ]} w_{i_m} ≤ capacity(B_k).

• Then, our solution sets x̂⁻_{i_1,k} = ··· = x̂⁻_{i_µ,k} = 1 and x̂⁻_{ik} = 0 for any other item. Clearly, x̂⁻_{ik} ≤ x̂_{ik} for all i ∈ L_{k+1}.

The next claim shows that the profit collected by x̂⁻ nearly matches the fractional optimum.

Lemma 2.9. Σ_{i∈[n]} Σ_{k∈[K−1]: i∈L_{k+1}} q_{ik} x̂⁻_{ik} ≥ (1 − ǫ) · OPT(LP).

Proof. Recall that the super-optimality property of x̂, as stated in (3), corresponds to having Σ_{i∈[n]} Σ_{k∈[K−1]: i∈L_{k+1}} q_{ik} x̂_{ik} ≥ OPT(LP). Therefore, by changing the order of summation, we can establish the desired claim by proving that Σ_{i∈L_{k+1}} q_{ik} x̂⁻_{ik} ≥ (1 − ǫ) · Σ_{i∈L_{k+1}} q_{ik} x̂_{ik} for every k ∈ [K − 1]. Since x̂⁻_{·k} = x̂_{·k} with respect to buckets whose capacity is not violated by x̂, it remains to focus on violated buckets.

For such buckets, we first observe that, by the maximality of µ,

Σ_{m∈[µ]} w_{i_m} > capacity(B_k) − w_{i_{µ+1}} ≥ (1 − 4ǫ) · capacity(B_k),    (5)

where the second inequality holds since i_{µ+1} ∈ L_{k+1}, and therefore w_{i_{µ+1}} ≤ ǫ · (1 + ǫ)^{k+1} ≤ 4ǫ · ((1 + ǫ)^k − (1 + ǫ)^{k−1}) = 4ǫ · capacity(B_k). In the opposite direction,

Σ_{m∈[M]} w_{i_m} = Σ_{i∈L_{k+1}} w_i x̂_{ik} ≤ capacity(B_k) + w_{i_inf(k)} ≤ (1 + 4ǫ) · capacity(B_k),    (6)

where the equality follows from how the indices i_1, ..., i_M were defined, the first inequality is precisely the fixable capacity property of x̂ (see (4)), and the second inequality holds since w_{i_inf(k)} ≤ 4ǫ · capacity(B_k), as explained earlier for w_{i_{µ+1}}. Consequently,

Σ_{i∈L_{k+1}} q_{ik} x̂⁻_{ik} = Σ_{m∈[µ]} q_{i_m,k} ≥ (Σ_{m∈[µ]} w_{i_m} / Σ_{m∈[M]} w_{i_m}) · Σ_{m∈[M]} q_{i_m,k} ≥ ((1 − 4ǫ)/(1 + 4ǫ)) · Σ_{m∈[M]} q_{i_m,k} ≥ (1 − ǫ) · Σ_{i∈L_{k+1}} q_{ik} x̂_{ik},

where the first inequality holds since q_{i_1,k}/w_{i_1} ≥ ··· ≥ q_{i_M,k}/w_{i_M}, and the second inequality is obtained by plugging in (5) and (6).
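The greedy repacking step admits a very short implementation. The helper below is an illustrative sketch of ours for a single over-packed bucket B_k, following the prefix rule described above (sort by profit density q_{ik}/w_i and keep the longest prefix that fits).

```python
def restore_bucket(items, w, q, k, cap):
    # items: those i with x̂_ik = 1 in a bucket B_k whose capacity cap is violated.
    ranked = sorted(items, key=lambda i: q[(i, k)] / w[i], reverse=True)
    kept, load = [], 0.0
    for i in ranked:
        if load + w[i] > cap:
            break            # stop at the first item that no longer fits (prefix rule)
        kept.append(i)
        load += w[i]
    return kept              # these items keep x̂⁻_ik = 1; all others in this bucket are dropped
```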
Performance guarantee. We conclude by noting that, since x̂⁻ is a feasible solution to (IP), Lemma 2.7 allows us to construct a permutation π_light with an overall profit of

Ψ(π_light) ≥ Σ_{i∈[n]} Σ_{k∈[K−1]: i∈L_{k+1}} q_{ik} x̂⁻_{ik} ≥ (1 − ǫ) · OPT(LP) ≥ (1 − ǫ) · Ψ_light(π∗),

where the second and third inequalities follow from Lemmas 2.9 and 2.8, respectively. From a running time perspective, the computational bottleneck of our approach is the Shmoys-Tardos algorithm (1993). As the latter is applied to a maximum generalized assignment instance consisting of n items and O(K) = O(|I|/ǫ) buckets, it requires O((|I|/ǫ)^{O(1)}) time in total. Beyond that, restoring the feasibility of x̂ and translating the resulting solution x̂⁻ back to a permutation can both be implemented in O((nK)^{O(1)}) time.

In this section, we develop an approximation scheme for the generalized incremental knapsack problem by embedding our LP-based approach for competing against light contributions within a self-improving algorithm. As formally stated in Theorem 3.1 below, the running time of this algorithm will be exponentially-dependent on log(n · w_max/w_min), meaning that it provides a quasi-polynomial time approximation scheme (QPTAS) when the ratio between the extremal item weights is polynomial in the input size. In Section 4, these ideas will be exploited within an approximate dynamic programming framework to derive a true QPTAS, without making any assumptions on the ratio w_max/w_min.

Theorem 3.1. For any accuracy level ǫ ∈ (0, 1), the generalized incremental knapsack problem can be approximated within a factor of 1 − ǫ in time O((nT)^{O((1/ǫ)·log(n·w_max/w_min))} · |I|^{O(1)}).

Outline. As an instructive step, we dedicate Section 3.1 to explaining how, given any feasible chain, one can define a residual instance on the remaining (non-inserted) items. In this context, we establish a number of structural properties that relate between the solution spaces of the original and residual instances, which will be useful moving forward. As explained in Section 3.2, the basic idea behind our "self-improving" algorithm resides in arguing that, given a black-box α-approximation for the generalized incremental knapsack problem, efficient guessing methods can be utilized to construct a solution that optimally competes against heavy contributions, and simultaneously, α-competes against light contributions. In Section 3.3, we combine this result with our near-optimal algorithm for light contributions and attain a performance guarantee of 1/(2 − α), up to lower-order terms. Repeated applications of these α → 1/(2 − α) improvements will be shown to obtain a (1 − ǫ)-fraction of the optimal profit within O(1/ǫ) rounds. It is important to mention that each such application by itself incurs an exponential dependency on log(n · w_max/w_min), meaning that the results of this section are incomparable to those stated in Theorem 2.1, where the running time involved is truly polynomial for any fixed ǫ > 0.

3.1 Residual instances and their properties

Instance representation. Due to working with modified instances in subsequent sections, we will designate the underlying set of items in a given instance by N. As before, each item i ∈ N is associated with a weight of w_i, each time period t ∈ [T] has a capacity of W_t, and we gain a profit of p_{it} for introducing item i in period t. That said, what differentiates between one instance and the other are two ingredients: the item set N and the time period capacities W = (W_1, ..., W_T) with respect to which these instances are defined.
It is important to point out that, regardless of the instance being considered, the item weights w_i, the number of time periods T, and the item-to-period profits p_{it} will be kept unchanged. For these reasons, we denote a generalized incremental knapsack instance simply by I = (N, W).

The |_G-operator. In the following, we introduce additional definitions, notation, and structural properties related to modified instances and their solution space. For a pair of chains, S = (S_1, ..., S_T) and G = (G_1, ..., G_T), we define the union of S and G as S ∪ G = (S_1 ∪ G_1, ..., S_T ∪ G_T), which is clearly a chain itself. For a chain S and a subset of items G ⊆ N, we denote by S|_G the restriction of S to G, namely, S|_G = (S_1 ∩ G, ..., S_T ∩ G); one can easily verify that S|_G is a chain as well. The next claim, whose straightforward proof is omitted, establishes the feasibility of S|_G whenever S is feasible.

Observation 3.2. Let S be a feasible chain for I. Then, for any set of items G ⊆ N, the chain S|_G is feasible as well.

The residual instance. Given a feasible chain G = (G_1, ..., G_T) for an instance I = (N, W), we define the residual generalized incremental knapsack instance I^{−G} = (N^{−G}, W^{−G}) as follows:

• The new set of items is N^{−G} = N \ G_T. Namely, we eliminate all items that were introduced at any point in time by G.

• The residual capacity of every time t ∈ [T] is set to W^{−G}_t = min_{t≤τ≤T} (W_τ − w(G_τ)).

• As previously mentioned, all item weights and profits remain unchanged.

To verify that the residual instance I^{−G} is well defined, it suffices to show that the residual capacities W^{−G} are non-negative and non-decreasing over time. The former property holds since w(G_t) ≤ W_t for every t ∈ [T], by feasibility of G. The latter property follows by observing that W^{−G}_t = min_{t≤τ≤T} (W_τ − w(G_τ)) ≤ min_{t+1≤τ≤T} (W_τ − w(G_τ)) = W^{−G}_{t+1}.

The next two claims, whose respective proofs appear in Appendices B.1 and B.2, explain the relationship between the solution spaces of the original instance I and its residual instance I^{−G}. For our purposes, the main implication of this relationship will be that, whenever we are able to "guess" a chain G = S∗|_G, where S∗ is an optimal chain for I, it suffices to focus on solving the residual instance I^{−G}. With an appropriate guess for the set of items G, this property will be a key idea within the approximation scheme we devise in the remainder of this section.

Lemma 3.3. Let G be a feasible chain for I and let R be a feasible chain for I^{−G}. Then, G ∪ R is a feasible chain for I with profit Φ(G ∪ R) = Φ(G) + Φ(R).

Lemma 3.4. Let S be a feasible chain for I and let G = S|_G, for some set of items G ⊆ N. Then, S|_{N\G} is a feasible chain for I^{−G} with profit Φ(S|_{N\G}) = Φ(S) − Φ(G). Moreover, if S is optimal for I, then S|_{N\G} is optimal for I^{−G}.
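The residual capacities are simply suffix minima of the leftover slack. The sketch below (our illustrative helper, not the paper's code) constructs I^{−G} from plain weight and capacity lists and a chain G given as a list of nested sets.

```python
def residual_instance(w, W, G_chain):
    # Builds I^{-G}: drop the items inserted by G and set
    # W^{-G}_t = min_{t <= τ <= T} ( W_τ - w(G_τ) ).
    T = len(W)
    remaining = [i for i in range(len(w)) if i not in G_chain[-1]]
    W_res, suffix_min = [0.0] * T, float("inf")
    for t in range(T - 1, -1, -1):                    # suffix minima, scanned from the right
        suffix_min = min(suffix_min, W[t] - sum(w[i] for i in G_chain[t]))
        W_res[t] = suffix_min
    return remaining, W_res
```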
Given a generalized incremental knapsack instance I = (N, W), let us focus our attention on a fixed optimal chain S∗. As argued in Lemma 2.2, this chain can be mapped to a permutation π_{S∗} : N → [|N|] whose objective value with respect to the corresponding sequencing formulation is Ψ(π_{S∗}) ≥ Φ(S∗). By decomposing the overall profit Ψ(π_{S∗}) into heavy and light contributions, as prescribed by Equation (1), we have:

Ψ(π_{S∗}) = Σ_{k∈[K]} Σ_{i∈H_k: C_{π_{S∗}}(i)∈I_k} ϕ_{π_{S∗}}(i) + Σ_{k∈[K]} Σ_{i∈L_k: C_{π_{S∗}}(i)∈I_k} ϕ_{π_{S∗}}(i),    (7)

where the first and second terms are again designated by Ψ_heavy(π_{S∗}) and Ψ_light(π_{S∗}), respectively. As a side note, similarly to Section 2, we assume without loss of generality that w_min ≥ 3. For α_H, α_L ∈ [0, 1], we say that an algorithm A guarantees an (α_H, α_L)-approximation with respect to S∗ when it computes a feasible chain S with Φ(S) ≥ α_H · Ψ_heavy(π_{S∗}) + α_L · Ψ_light(π_{S∗}). We mention in passing that this definition depends on the specific permutation π_{S∗}, and is generally different from the standard notion of an α-approximation, where the chain S is required to satisfy Φ(S) ≥ α · Φ(S∗).

From α-approximation to (1, α)-approximation. In what follows, we show how to boost the profit performance of any approximation algorithm for the generalized incremental knapsack problem. For every α ∈ [0, 1], we show how to combine a black-box α-approximation with further guesses for the positioning of heavy items with respect to the permutation π_{S∗} in order to derive a (1, α)-approximation, incurring an extra multiplicative factor of O((nT)^{O((1/ǫ)·log(nρ))}) in running time, where ρ = w_max/w_min. This result can be formally stated as follows.

Lemma 3.5. Suppose that the algorithm A constitutes an α-approximation for generalized incremental knapsack, for some α ∈ [0, 1]. Then, there exists a (1, α)-approximation whose running time is O((nT)^{O((1/ǫ)·log(nρ))} · Time_A(n, T)). Here, Time_A(n, T) designates the worst-case running time of A for instances with n items and T time periods.

Preliminaries. We remind the reader that Section 2.2 has previously defined the intervals I_0 = [0, 1] and I_k = ((1 + ǫ)^{k−1}, (1 + ǫ)^k] for k ∈ [K], where K = ⌈log_{1+ǫ}(Σ_{i∈[n]} w_i)⌉. In this regard, an item i is k-heavy when w_i ≥ ǫ · (1 + ǫ)^k, with the convention that H_k stands for the collection of k-heavy items. Let G∗_heavy be the set of items that are heavy for the interval that contains their completion time with respect to the permutation π_{S∗}, i.e., G∗_heavy = ∪_{k∈[K]} {i ∈ H_k : C_{π_{S∗}}(i) ∈ I_k}. The following lemma, whose proof appears in Appendix B.3, provides an upper bound on the cardinality of this set.

Lemma 3.6. |G∗_heavy| ≤ (1/ǫ) · log(nρ).

We proceed by considering the restriction of the optimal chain S∗ to the set of items G∗_heavy, which will be denoted by H∗ = S∗|_{G∗_heavy}. By Observation 3.2, H∗ is a feasible chain for I. The next lemma, whose proof can be found in Appendix B.4, relates between the profit of this chain and heavy contributions with respect to the permutation π_{S∗}.

Lemma 3.7. Φ(H∗) = Ψ_heavy(π_{S∗}).

The algorithm. At a high level, our algorithm relies on "knowing" the restricted chain H∗ in advance, which will be justified by guessing all items in G∗_heavy and their insertion times with respect to the optimal chain S∗. This procedure will be implemented by enumerating over all possible configurations of these parameters. For each such guess, we construct the residual generalized incremental knapsack instance, to which the α-approximation algorithm A is applied. Formally, given an instance I = (N, W) and an error parameter ǫ > 0, we proceed as follows:

1. For every feasible chain G = (G_1, ..., G_T) with |G_T| ≤ (1/ǫ) · log(nρ):

(a) Construct the residual instance I^{−G}.

(b) Apply the algorithm A to obtain an α-approximate feasible chain S^{−G} = (S^{−G}_1, ..., S^{−G}_T) for I^{−G}.
2. Return the chain G ∪ S^{−G} of maximum profit among those considered above.

Analysis: Feasibility and running time. We first observe that, for any feasible chain G constructed in step 1, since S^{−G} is a feasible chain for I^{−G}, the feasibility of G ∪ S^{−G} for I follows by Lemma 3.3. In terms of running time, we are considering only chains that introduce at most (1/ǫ) · log(nρ) items over all time periods. Thus, the number of chains being enumerated is O((nT)^{O((1/ǫ)·log(nρ))}). For each residual instance, consisting of T time periods and at most n items, we apply the algorithm A once, implying that the overall running time is indeed O((nT)^{O((1/ǫ)·log(nρ))} · Time_A(n, T)).

Analysis: (1, α)-approximation guarantee. We conclude the proof of Lemma 3.5 by arguing that G∗ ∪ S^{−G∗} is a (1, α)-approximate chain with respect to S∗ for the original instance I.

Lemma 3.8. Φ(G∗ ∪ S^{−G∗}) ≥ Ψ_heavy(π_{S∗}) + α · Ψ_light(π_{S∗}).

Proof. We begin by observing that the feasible chain H∗ = S∗|_{G∗_heavy} is one of those considered in step 1. To verify this claim, note that |G∗_heavy| ≤ (1/ǫ) · log(nρ) by Lemma 3.6, meaning that H∗ introduces at most that many items across all time periods. As a result, since the chain G∗ ∪ S^{−G∗} attains a maximum profit among those considered, we have Φ(G∗ ∪ S^{−G∗}) ≥ Φ(H∗ ∪ S^{−H∗}), and it remains to prove that Φ(H∗ ∪ S^{−H∗}) ≥ Ψ_heavy(π_{S∗}) + α · Ψ_light(π_{S∗}).

For this purpose, let L∗ = S∗|_{N\G∗_heavy} be the restriction of S∗ to the set N \ G∗_heavy, which is a feasible chain for I by Observation 3.2. We next show that Φ(L∗) = Ψ_light(π_{S∗}). In order to derive this claim, note that since L∗_T and H∗_T are disjoint and S∗ = H∗ ∪ L∗, it follows that

Φ(L∗) = Φ(S∗) − Φ(H∗) = Φ(S∗) − Ψ_heavy(π_{S∗}) = Ψ(π_{S∗}) − Ψ_heavy(π_{S∗}) = Ψ_light(π_{S∗}),

where the second equality holds due to Lemma 3.7, the third equality is obtained by recalling that Ψ(π_{S∗}) = Φ(S∗), as shown along the proof of Lemma 3.7, and the last equality follows from the profit decomposition (7).

However, the crucial observation is that L∗ is a feasible chain for the residual instance I^{−H∗}, by Lemma 3.4. Consequently, since the algorithm A computes an α-approximate feasible chain S^{−H∗} for the latter instance, Φ(S^{−H∗}) ≥ α · Φ(L∗) = α · Ψ_light(π_{S∗}), implying that H∗ ∪ S^{−H∗} indeed has a profit of Φ(H∗ ∪ S^{−H∗}) = Φ(H∗) + Φ(S^{−H∗}) ≥ Ψ_heavy(π_{S∗}) + α · Ψ_light(π_{S∗}).

We proceed by revealing the self-improving feature of our approach, by showing that a (1, α)-approximation for generalized incremental knapsack leads in turn to a (1 − δ)/(2 − α)-approximation, when combined with our algorithm for light items, presented in Section 2.4. We will then show how to recursively apply this self-improving idea to eventually derive an approximation scheme.

Lemma 3.9. Suppose that, for some α ∈ [0, 1], the algorithm A constitutes an α-approximation. Then, for any accuracy level δ > 0, the generalized incremental knapsack problem can be approximated within factor (1 − δ)/(2 − α) in time O((nT)^{O((1/δ)·log(nρ))} · Time_A(n, T) + (|I|/δ)^{O(1)}).

Proof. As explained in Section 3.2, the optimal chain S∗ can be mapped to a permutation π_{S∗} whose overall profit Ψ(π_{S∗}) decomposes into heavy and light contributions, Ψ(π_{S∗}) = Ψ_heavy(π_{S∗}) + Ψ_light(π_{S∗}). Now, on the one hand, Lemma 3.5 provides us with a (1, α)-approximation in O((nT)^{O((1/δ)·log(nρ))} · Time_A(n, T)) time.
That is, we obtain a feasible chain S^{(1,α)} with Φ(S^{(1,α)}) ≥ Ψ_heavy(π_{S∗}) + α · Ψ_light(π_{S∗}). On the other hand, the main result of Section 2.4 allows us to compute in O((|I|/δ)^{O(1)}) time a permutation π_light with a profit of Ψ(π_light) ≥ (1 − δ) · Ψ_light(π_{S∗}). By converting this permutation to a feasible chain S^{(0,1−δ)} along the lines of Lemma 2.2, we clearly obtain a (0, 1 − δ)-approximation, meaning that Φ(S^{(0,1−δ)}) ≥ (1 − δ) · Ψ_light(π_{S∗}). Our combined approach independently employs both algorithms and returns the more profitable of the two feasible chains computed, S^{(1,α)} and S^{(0,1−δ)}, to obtain a profit of

max{Φ(S^{(1,α)}), Φ(S^{(0,1−δ)})} ≥ max{Ψ_heavy(π_{S∗}) + α · Ψ_light(π_{S∗}), (1 − δ) · Ψ_light(π_{S∗})}
≥ (1/(2 − α)) · (Ψ_heavy(π_{S∗}) + α · Ψ_light(π_{S∗})) + (1 − 1/(2 − α)) · (1 − δ) · Ψ_light(π_{S∗})
≥ ((1 − δ)/(2 − α)) · (Ψ_heavy(π_{S∗}) + Ψ_light(π_{S∗}))
= ((1 − δ)/(2 − α)) · Ψ(π_{S∗}) ≥ ((1 − δ)/(2 − α)) · Φ(S∗),

where the last inequality follows from Lemma 2.2.

The final approximation scheme. We conclude by explaining how our α → (1 − δ)/(2 − α) improvement, outlined in Lemma 3.9, can be iteratively applied to derive an approximation scheme for the generalized incremental knapsack problem, thereby sealing the proof of Theorem 3.1.

For the purpose of ensuring a (1 − ǫ)-fraction of the optimal profit, we will set the error tolerance δ in Lemma 3.9 as a function of ǫ, where the exact dependency will be determined later on. Given this self-improving result, we define a sequence of algorithms A_0, A_1, ..., with the convention that the approximation ratio of each such algorithm A_r is denoted by α_r. Specifically, this sequence begins with the trivial algorithm A_0 that returns an empty solution (∅, ..., ∅), meaning that α_0 = 0. Then, by applying Lemma 3.9 with respect to A_0, we obtain the algorithm A_1, for which α_1 = (1 − δ)/2. Subsequently, by a similar application with respect to A_1, we obtain A_2, with α_2 = (1 − δ)/(2 − α_1). In general, for every integer r ≥ 1, the resulting algorithm A_r guarantees an approximation ratio of α_r = (1 − δ)/(2 − α_{r−1}). The next lemma, whose proof is presented in Appendix B.5, provides a closed-form lower bound on α_r.

Lemma 3.10. α_r ≥ r/(r + 1) − rδ, for every r ≥ 1.

By choosing an error tolerance of δ = ǫ²/8, the above lemma implies that ⌈2/ǫ⌉ self-improving rounds produce an algorithm A_{⌈2/ǫ⌉} for computing a feasible chain S with a profit of Φ(S) ≥ (⌈2/ǫ⌉/(⌈2/ǫ⌉ + 1) − ⌈2/ǫ⌉ · ǫ²/8) · Φ(S∗) ≥ (1 − ǫ) · Φ(S∗), thereby deriving the approximation guarantee of Theorem 3.1. Furthermore, it is not difficult to verify that algorithm A_{⌈2/ǫ⌉} runs in O((nT)^{O((1/ǫ)·log(n·w_max/w_min))} · |I|^{O(1)}) time, by induction on r.
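The arithmetic behind the iterated improvement is easy to verify numerically. The toy loop below is purely illustrative: it iterates α_r = (1 − δ)/(2 − α_{r−1}) from α_0 = 0 and reports when the ratio exceeds 1 − ǫ; the particular constant in the choice δ = ǫ²/8 is only one admissible setting, and with δ = 0 the recurrence yields exactly α_r = r/(r + 1).

```python
def rounds_needed(eps, delta=None):
    # Iterate the self-improving recurrence α_r = (1 - δ)/(2 - α_{r-1}) until α_r >= 1 - ε.
    delta = eps ** 2 / 8.0 if delta is None else delta
    alpha, r = 0.0, 0
    while alpha < 1.0 - eps:
        alpha = (1.0 - delta) / (2.0 - alpha)
        r += 1
    return r, alpha   # r is O(1/eps), consistent with Lemma 3.10
```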
For this purpose, given20n error parameter ǫ > 0, we say that the instance I is well-spaced when its set of items N canbe partitioned into clusters C , . . . , C M satisfying the following properties:1. Weight ratio within clusters : For every m ∈ [ M ], the weights of any two items in cluster C m differ by a multiplicative factor of at most n /ǫ .2. Weight gap between clusters : For every m , m ∈ [ M ] with m < m , the weight of anyitem in cluster C m is greater than the weight of any item in cluster C m by a multiplicativefactor of at least n m − m − /ǫ .In Section 4.2, we show that one can efficiently identify a subset of items over which theinduced instance is well-spaced, while still admitting a near-optimal solution. We derive thisresult, as formally stated below, through an application of the shifting method (see, for instance,(Hochbaum and Maass, 1985; Baker, 1994)). Lemma 4.2. There exists an item set N spaced ⊆ N for which I spaced = ( N spaced , W ) is a well-spaced instance, whose optimal chain S spaced guarantees a profit of Φ( S spaced ) ≥ (1 − ǫ ) · Φ( S ∗ ) .Such a set can be determined in O (( n/ǫ ) O (1) ) time. Step 2: Proving the sparse-crossing property. For simplicity of notation, we assumefrom this point on that the instance I = ( N , W ) is well-spaced, with clusters C , . . . , C M . Nowsuppose that the optimal permutation π ∗ for the sequencing-based formulation of this instancewas known to be “crossing-free”, namely, items belonging to cluster C appear first in π ∗ ,followed by those belonging to cluster C , so on and so forth. In other words, a left-to-right scanof the permutation π ∗ reveals that it is weakly-increasing by cluster. In this ideal situation,the approximation scheme we propose in Section 3 can be sequentially employed to the clusters C , . . . , C M in increasing order. This way, we would have obtained a (1 − ǫ )-approximation intruly quasi-polynomial time, since the extremal weight ratio within each cluster is n /ǫ -bounded,by property 1.Unfortunately, elementary examples show that an optimal permutation π ∗ may not becrossing-free, in the sense that items in any given cluster can be preceded by items belongingto higher-index clusters. That said, a suitable relaxation of these ideas can still be exploited.Formally, let us denote by cross m ( π ) the number of items in clusters C m +1 , . . . , C M that appearin the permutation π before the last item belonging to cluster C m ; note that crossing-free isequivalent to having cross ( π ) = · · · = cross M ( π ) = 0. Our next structural result, formallyestablished in Section 4.3, proves the existence of a near-optimal permutation with very fewitems crossing each cluster. Lemma 4.3. There exist an item set N sparse ⊆ N and a permutation π sparse : N sparse → [ |N sparse | ] satisfying:1. Sparse crossing: max m ∈ [ M ] cross m ( π sparse ) ≤ ⌈ log M ⌉ ǫ .2. Near-optimal profit: Ψ( π sparse ) ≥ (1 − ǫ ) · Ψ( π ∗ ) . Technically speaking, our proof is based on applying a sequence of recursive transformationswith respect to the unknown optimal permutation π ∗ . To convey the high-level idea, let i mid be21he last-appearing item in π ∗ out of clusters C , . . . , C M/ . When fewer than 1 /ǫ items in clusters C ( M/ , . . . , C M appear before i mid , each of the clusters C , . . . , C M/ has at most 1 /ǫ crossingsdue to items in C ( M/ , . . . , C M . 
We can therefore recursively proceed into the left part of π ∗ , stretching up to the item i mid , and into its right part, consisting of the remaining items.In the opposite case, where at least 1 /ǫ items in clusters C ( M/ , . . . , C M appear before i mid ,the important observation is that we can eliminate the cheapest out of the first 1 /ǫ such itemswhile losing only an O ( ǫ )-fraction of their combined profit. However, since this item is heavierthan any item in lower-index clusters by a factor of at least n (see property 2), the gap we havejust created is sufficiently large to pull back each and every item in clusters C , . . . , C M/ , onlyincreasing their profit contributions. We can now recursively proceed into the left and rightparts. Step 3: The external dynamic program. Given the sparse-crossing property, we dedicateSection 4.4 to proposing a dynamic programming approach for computing a near-optimal per-mutation. For this purpose, by recycling some of the notation introduced in Section 2.3, ourstate description ( m, ψ m , Q >m ) will consists of the following parameters: • The index of the current cluster, m . • The profit requirement, ψ m . • The set of items Q >m belonging to clusters C m +1 , . . . , C M that will be crossing intolower-index clusters, noting that Lemma 4.3 allows us to consider only small sets, ofsize O ( log Mǫ ).At a high level, the value function F ( m, ψ m , Q >m ) will represent the minimum makespan w ( S )that can be attained, over all subset of items S within the union of Q >m and the clusters C , . . . , C m (namely, S ⊆ Q >m ⊎ ( U µ ∈ [ m ] C µ )) and over all permutations π : S → [ | S | ] thatgenerate a total profit of at least ψ m . Clearly, the best-possible profit of a sparse-crossingpermutation corresponds to the maximal value ψ M that satisfies F ( M, ψ M , ∅ ) < ∞ , which is atleast (1 − ǫ ) · Ψ( π ∗ ), by Lemma 4.3.As formally explained in Section 4.4, within the recursive equations for computing F ( m, ψ m , Q >m ), evaluating the marginal makespan increase of each possible action involvessolving a single-cluster subproblem. Specifically for the latter, the approximation scheme wehave devised in Section 3 will be shown to incur a quasi-polynomial running time. In parallel,the dominant factor in determining the underlying number of states emerges from the set ofitems Q >m , taking O ( n O ( ǫ log M ) ) possible values, respectively, thus forming the second sourceof quasi-polynomiality in our approach and concluding the proof of Theorem 4.1. For the purpose of identifying the desired subset N spaced , we initially partitionthe overall collection of items N into buckets B , . . . , B L according to their weights. Thispartition will be geometric, by powers of n , meaning that L = ⌈ log n ( w max w min ) ⌉ + 1. Specifically,the first bucket B consists of items whose weight resides in [ w min , n · w min ), the second bucket22 consists of those with weight in [ n · w min , n · w min ), so on and so forth, where in general,bucket B ℓ corresponds to the interval [ n ℓ − · w min , n ℓ · w min ). It is easy to verify that B , . . . , B L is indeed a partition of N . Creating clusters. Now let r ∈ { , . . . , ǫ − } be an integer parameter whose value will bedetermined later. Accordingly, we create a subset of items N r ⊆ N , that will be clusteredinto C r , . . . , C rM with M = ⌊ ǫL + 2 ⌋ , as follows. Intuitively, we introduce “gaps” within thesequence of buckets B , . . . 
, B L , spaced apart by ǫ indices, through eliminating every bucket B ℓ with ℓ mod ǫ = r ; then, between every pair of successive gaps, buckets will be unified toform a single cluster. That is, the first cluster is defined as C r = U r − ℓ =1 B ℓ , the second cluster is C r = U r − /ǫℓ = r +1 B ℓ , the third is C r = U r − /ǫℓ = r +1+1 /ǫ B ℓ , and so on. Finally, we define the subset ofitems N r as the union of all clusters, i.e., N r = U m ∈ [ M ] C rm , with a corresponding generalizedincremental knapsack instance I r = ( N r , W ). Analysis. In what follows, we argue that for every r ∈ { , . . . , ǫ − } , the instance I r wehave just constructed is in fact well-spaced, and the partition of N r into clusters is given by C r , . . . , C rM . For this purpose, we separately prove each of the required well-spaced properties.1. Weight ratio within clusters : Consider two items i and i belonging to the same cluster C rm . Letting B ℓ and B ℓ be the buckets containing these items, respectively, their weightratio can be upper bounded by observing that w i w i ≤ max i ∈B ℓ w i min i ∈B ℓ w i ≤ n ℓ − ( ℓ − ≤ n (1 /ǫ ) − , where the second inequality holds since each bucket B ℓ contains items whose weight fallswithin [ n ℓ − · w min , n ℓ · w min ), and the third inequality follows by noting that each clusterrepresents the union of at most ǫ − ℓ − ℓ ≤ ǫ − Weight gap between clusters : Similarly, let i and i be a pair of items that belong toclusters C rm and C rm , respectively, with m < m . In this case, when we denote thecorresponding buckets by B ℓ and B ℓ , their weight ratio can be lower bounded by w i w i ≥ min i ∈B ℓ w i max i ∈B ℓ w i ≥ n ( ℓ − − ℓ ≥ n m − m − /ǫ , where the last inequality holds since ℓ ∈ { r + 1 + m − ǫ , . . . , r − m − ǫ } and ℓ ∈{ r + 1 + m − ǫ , . . . , r − m − ǫ } , by definition of C rm and C rm .We conclude the proof by showing that at least one of the well-spaced instances I , . . . I ǫ − is associated with an optimal profit of at least (1 − ǫ ) · Φ( S ∗ ). To this end, with respect to the23ptimal chain S ∗ for the original instance I , note that the restriction of this chain S ∗ | N r to theitem set N r is clearly feasible for I r , by Observation 3.2. Letting S r ∗ be an optimal chain for I r , we consequently havemax ≤ r ≤ (1 /ǫ ) − Φ( S r ∗ ) ≥ max ≤ r ≤ (1 /ǫ ) − Φ( S ∗ | N r ) ≥ ǫ · (1 /ǫ ) − X r =0 Φ( S ∗ | N r )= ǫ · (1 /ǫ ) − X r =0 X t ∈ [ T ] X i ∈ ( S ∗ t \ S ∗ t − ) ∩N r p it = ǫ · X t ∈ [ T ] X i ∈ S ∗ t \ S ∗ t − (cid:12)(cid:12)(cid:12)(cid:12)(cid:26) r ∈ (cid:26) , . . . , ǫ − (cid:27) : i ∈ (cid:0) S ∗ t \ S ∗ t − (cid:1) ∩ N r (cid:27)(cid:12)(cid:12)(cid:12)(cid:12) · p it = (1 − ǫ ) · X t ∈ [ T ] X i ∈ S ∗ t \ S ∗ t − p it = (1 − ǫ ) · Φ( S ∗ ) , where the next-to-last equality holds since every item introduced in the optimal chain S ∗ appearsin all but one of the sets N , . . . , N (1 /ǫ ) − . We begin by introducing some additional definitions and notation that willbe utilized throughout this proof. For a set of cluster indices M ⊆ [ M ], we use C M to designatethe union of M -indexed clusters, i.e., C M = U m ∈M C m . Expanding upon the definition ofcross m ( π ), given disjoint sets, M ⊆ [ M ] and M ⊆ [ M ], let cross M , M ( π ) denote the numberof items in C M that appear in the permutation π before the last item in C M , namely,cross M , M ( π ) = (cid:12)(cid:12)(cid:12)(cid:12)(cid:26) i ∈ C M : π ( i ) < max j ∈C M π ( j ) (cid:27)(cid:12)(cid:12)(cid:12)(cid:12) . 
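To make the crossing counts concrete, the following minimal sketch is our own illustrative code (items are represented by name and a permutation is given as a left-to-right ordering); it evaluates the quantity $\mathrm{cross}_{\mathcal{M}_1,\mathcal{M}_2}(\pi)$ just defined, together with the single-cluster variant $\mathrm{cross}_m(\pi)$.

```python
from typing import Iterable, List, Sequence, Set

def cross_between(order: Sequence[str], lower: Iterable[str], upper: Iterable[str]) -> int:
    """Number of items from the higher-indexed clusters (`upper`) that appear, in the given
    left-to-right order, before the last item belonging to the lower-indexed clusters (`lower`)."""
    pos = {item: p for p, item in enumerate(order)}
    lower_positions = [pos[i] for i in lower if i in pos]
    if not lower_positions:
        return 0
    last_lower = max(lower_positions)
    return sum(1 for i in set(upper) if i in pos and pos[i] < last_lower)

def cross_m(order: Sequence[str], clusters: List[Set[str]], m: int) -> int:
    """cross_m(pi): items of clusters C_{m+1}, ..., C_M preceding the last item of C_m."""
    higher = set().union(*clusters[m:])
    return cross_between(order, clusters[m - 1], higher)

# Toy example with two clusters: item "c" of C_2 crosses into C_1.
clusters = [{"a", "b"}, {"c", "d"}]
order = ["a", "c", "b", "d"]
assert cross_m(order, clusters, 1) == 1
assert cross_m(order, clusters, 2) == 0
```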
When cross M , M ( π ) ≥ ǫ , we use X M , M ( π ) to designate the set comprised of the first ǫ items in M -indexed clusters in the permutation π . When cross M , M ( π ) < ǫ , we simply set X M , M ( π ) = ∅ . Fixing permutations. In order to formalize the notion of “pulling back” items withina given permutation, as briefly sketched in Section 4.1, we define a fixing procedure,FixCrossing( π, M − , M + ). Here, we receive as input a permutation π : Q → [ |Q| ] over anitem set Q ⊆ N , along with two disjoint sets of cluster indices, M − and M + , which are as-sumed to satisfy max M − < min M + , i.e., any index in M − is strictly smaller than any index in M + . As explained below, this procedure constructs in polynomial time a modified permutation¯ π : ¯ Q → [ | ¯ Q| ], over a subset ¯ Q ⊆ Q , that satisfies the following properties:1. Sparse ( M − , M + ) -crossing: cross M − , M + (¯ π ) ≤ ǫ .2. Completion times: C ¯ π ( i ) ≤ C π ( i ), for every i ∈ ¯ Q .24. Difference: Q \ ¯ Q consists of at most one item, which is a member of X M − , M + ( π ).For this purpose, when cross M − , M + ( π ) < ǫ , the procedure FixCrossing( π, M − , M + ) re-turns exactly the same permutation (i.e., ¯ π = π ), without any alterations. In the opposite case,when cross M − , M + ( π ) ≥ ǫ , let i M − , M + be the least profitable item in X M − , M + ( π ) with respectto the optimal permutation π ∗ , namely, i M − , M + = argmin { ϕ π ∗ ( i ) : i ∈ X M − , M + ( π ) } . Ourconstruction consists of eliminating i M − , M + and placing instead all items in C M − appearing in π after i M − , M + ; this alteration results in a permutation ¯ π over Q \ { i M − , M + } . Formally, let A − and ¯ A − be the items appearing after i M − , M + out of C M − and N \ C M − , respectively, i.e., A − = (cid:8) i ∈ C M − : π ( i ) > π ( i M − , M + ) (cid:9) and ¯ A − = (cid:8) i ∈ N \ C M − : π ( i ) > π ( i M − , M + ) (cid:9) . For simplicity, we index the items in A − according to their order within the permutation π ,which results in having A − = { i , . . . , i |A − | } with π ( i ) < · · · < π ( i |A − | ). Now, the modifiedpermutation ¯ π is constructed as follows: • Before i M − , M + : Items in positions 1 , . . . , π ( i M − , M + ) − π remainwithin their original positions, meaning that ¯ π ( i ) = π ( i ) for every item i with π ( i ) ≤ π ( i M − , M + ) − • Instead of i M − , M + : Items in A − will appear in place of i M − , M + following their relativeorder in π . That is, ¯ π ( i k ) = π ( i M − , M + ) − k for every k ∈ [ |A − | ]. • After i M − , M + : Items in ¯ A − will appear after those in A − , again following their relativeorder in π . In other words, ¯ π ( i ) = π ( i ) − |{ k ∈ [ |A − | ] : π ( i k ) > π ( i ) }| for every item i ∈ ¯ A − .In Appendix C.1, we show that the resulting permutation satisfies its desired properties, asformally stated below. Lemma 4.4. The permutation ¯ π satisfies properties 1-3. The recursive construction. We are now ready to explain how recursive applications of thefixing procedure allow us to conclude the proof of Lemma 4.3. At a high level, we bisect the clus-ter indices [ M ], such that in each step the indices being considered are split into their lower half M − and upper half M + , with respect to which the fixing procedure FixCrossing( · , M − , M + )will be applied. 
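The single fixing step just described can be summarized in code. The sketch below is our own illustration (list-based permutations, a `profit` map standing in for $\varphi_{\pi^*}$, and $\lceil 1/\epsilon \rceil$ as the crossing threshold), not the authors' implementation; it follows the three-part construction above: keep the prefix before the eliminated item, place the pulled-back items of the lower-indexed clusters in its position, and append the remaining items in their original relative order.

```python
import math
from typing import Dict, List, Sequence, Set

def fix_crossing(order: Sequence[str], lower: Set[str], upper: Set[str],
                 profit: Dict[str, float], eps: float) -> List[str]:
    """One call of the FixCrossing procedure sketched above (illustrative only).
    `order` is the permutation pi as a left-to-right list, `lower`/`upper` are the item sets
    of the M^- and M^+ indexed clusters, and `profit[i]` plays the role of phi_{pi*}(i)."""
    threshold = math.ceil(1 / eps)
    pos = {item: p for p, item in enumerate(order)}
    lower_positions = [pos[i] for i in lower if i in pos]
    if not lower_positions:
        return list(order)
    last_lower = max(lower_positions)
    crossing = [i for i in order[:last_lower] if i in upper]
    if len(crossing) < threshold:
        return list(order)                           # few crossings: nothing to fix
    x_set = crossing[:threshold]                     # X_{M-,M+}(pi): first 1/eps crossing items
    victim = min(x_set, key=lambda i: profit[i])     # least profitable item of X w.r.t. pi*
    before = list(order[:pos[victim]])               # positions before the eliminated item
    after = order[pos[victim] + 1:]
    pulled = [i for i in after if i in lower]        # A^-: lower-cluster items pulled back
    rest = [i for i in after if i not in lower]      # remaining items, same relative order
    return before + pulled + rest
```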
The resulting permutation will then be divided into left and right parts, whichare recursively bisected along the same lines.To present the specifics of this bisection as simply as possible, we assume without loss ofgenerality that the number of clusters M is a power of 2; otherwise, empty clusters can beappended to the sequence C , . . . , C M . At the upper level of the recursion, we bisect the entirecollection of cluster indices [ M ] into M [1 , M ] = { , . . . , M } and M [ M +1 ,M ] = { M + 1 , . . . , M } .Designating the optimal permutation by π [1 ,M ] = π ∗ , we employ our fixing procedure withFixCrossing( π [1 ,M ] , M [1 , M ] , M [ M +1 ,M ] ), to obtain the permutation ¯ π [1 ,M ] . Now, we break thelatter into its left and right part, π [1 , M ] and π [ M +1 ,M ] , such that the left permutation π [1 , M ] is25he prefix of ¯ π [1 ,M ] ending at the last item in C M [1 , M ∪ X M [1 , M , M [ M ,M ] ( π [1 ,M ] ), whereas theright permutation π [ M +1 ,M ] is comprised of the remaining suffix.In the second level of the recursion, for the left permutation π [1 , M ] , we bi-sect M [1 , M ] into M [1 , M ] = { , . . . , M } and M [ M +1 , M ] = { M + 1 , . . . , M } , fol-lowed by applying FixCrossing( π [1 , M ] , M [1 , M ] , M [ M +1 , M ] ). Similarly, for the right per-mutation π [ M +1 ,M ] , its corresponding set of cluster indices M [ M +1 ,M ] is bisected into M [ M +1 , M ] = { M + 1 , . . . , M } and M [ M +1 ,M ] = { M + 1 , . . . , M } , in which case we applyFixCrossing( π [ M +1 ,M ] , M [ M +1 , M ] , M [ M +1 ,M ] ). This recursive procedure continues up untilthe resulting sets of cluster indices are singletons. At that point in time, our final permutation π sparse is obtained by concatenating π [1 , , π [2 , , . . . , π [ M,M ] . Analysis. For ease of presentation, we make use of Ω to denote the set of pairs of cluster indexsets with respect to which FixCrossing( · , · , · ) is employed throughout our recursive construction,meaning thatΩ = (cid:26) (cid:16) M [1 , M ] , M [ M +1 ,M ] (cid:17) , [level 1] (cid:16) M [1 , M ] , M [ M +1 , M ] (cid:17) , (cid:16) M [ M +1 , M ] , M [ M +1 ,M ] (cid:17) , [level 2] · · · (cid:0) M [1 , , M [2 , (cid:1) , (cid:0) M [3 , , M [4 , (cid:1) , . . . , (cid:0) M [ M − ,M − , M [ M,M ] (cid:1) (cid:27) . [level log M ]With this notation, we show in the next two claims that the permutation π sparse indeed satisfiesthe sparse crossing and near-optimal profit properties of Lemma 4.3. Lemma 4.5. cross m ( π sparse ) ≤ log Mǫ , for every m ∈ [ M ] . Proof. By construction of π sparse , every item belonging to one of the clusters C m +1 , . . . , C M that appears in this permutation before the last item in cluster C m necessarily resides in X M − , M + ( π [min M − , max M + ] ), for some pair ( M − , M + ) ∈ Ω with m ∈ M − . To verify thisclaim, consider such a crossing item i , say belonging to cluster C m + . By the way our recursiveconstruction of Ω is defined, there exists a unique pair of cluster index sets ( M − , M + ) ∈ Ωfor which m ∈ M − and m + ∈ M + ; we argue that i ∈ X M − , M + ( π [min M − , max M + ] ). Indeed, inthe next recursion level, the left permutation π [min M − , max M − ] is the prefix of ¯ π [min M − , max M + ] ending with the last item in C M − ∪ X M − , M + ( π [min M − , max M + ] ). Furthermore, by construction,all items in the right permutation π [min M + , max M + ] will appear in π sparse after all items in theleft permutation π [min M − , max M − ] . 
Therefore, since i ∈ C m + with m + ∈ M + and since this itemappears in π sparse before the last item in cluster C m , we know that i appears as part of the leftpermutation π [min M − , max M − ] , implying that i ∈ X M − , M + ( π [min M − , max M + ] ).As any such item i ∈ X M − , M + ( π [min M − , max M + ] ) contributes at most once towardcross M − , M + (¯ π [min M − , max M + ] ), we havecross m ( π sparse ) ≤ X ( M− , M +) ∈ Ω: m ∈M− cross M − , M + (cid:0) ¯ π [min M − , max M + ] (cid:1) ǫ · (cid:12)(cid:12)(cid:8) ( M − , M + ) ∈ Ω : m ∈ M − (cid:9)(cid:12)(cid:12) ≤ log Mǫ . Here, the second inequality holds since cross M − , M + (¯ π [min M − , max M + ] ) ≤ ǫ by property 1 ofthe fixing procedure. The third inequality is obtained by observing that, as the definition ofΩ shows, all sets appearing in a single level of the recursion form a partition of [ M ], implyingthat m ∈ M − for at most one pair ( M − , M + ) in that level. As there are log M levels overall,it follows that |{ ( M − , M + ) ∈ Ω : m ∈ M − }| ≤ log M . Lemma 4.6. Ψ( π sparse ) ≥ (1 − ǫ ) · Ψ( π ∗ ) . Proof. To prove the desired claim, we first establish two auxiliary claims, that will enable usto relate between the profits Ψ( π sparse ) and Ψ( π ∗ ). For ease of presentation, the correspondingproofs can be found in Appendices C.2 and C.3, respectively. Claim 4.7. Ψ( π sparse ) ≥ Ψ( π ∗ ) − ǫ · P ( M − , M + ) ∈ Ω ϕ π ∗ ( X M − , M + ( π [min M − , max M + ] )) . Claim 4.8. For any two distinct pairs ( M − , M +1 ) and ( M − , M +2 ) in Ω , the item sets X M − , M +1 ( π [min M − , max M +1 ] ) and X M − , M +2 ( π [min M − , max M +2 ] ) are disjoint. Consequently, the profit attained by the permutation π sparse can be bounded by noting thatΨ( π sparse ) ≥ Ψ( π ∗ ) − ǫ · X ( M − , M + ) ∈ Ω ϕ π ∗ (cid:0) X M − , M + (cid:0) π [min M − , max M + ] (cid:1)(cid:1) ≥ Ψ( π ∗ ) − ǫ · X i ∈N ϕ π ∗ ( i )= (1 − ǫ ) · Ψ( π ∗ ) , where the first inequality is precisely Claim 4.7, and the second inequality follows from Claim 4.8. Given the sparse-crossing property of the near-optimal permutation π sparse , whose existencehas been established in Lemma 4.3, we turn our attention to formally presenting a dynamicprogramming approach for computing a permutation with a profit of at least (1 − ǫ ) · Ψ( π sparse ). States. Building on the intuition provided in Section 4.1, we remind the reader that eachstate ( m, ψ m , Q >m ) of our dynamic program consists of the following parameters: • The index of the current cluster m , taking values in [ M ] . • The total profit ψ m collected thus far. Initially, ψ m will be treated as a continuousparameter, taking values in [0 , np max ], where p max is the maximum profit attainable byany single item, i.e., p max = max { p it : i ∈ [ n ] , t ∈ [ T ] , and w i ≤ W t } . • The set of items Q >m belonging to clusters C m +1 , . . . , C M that will be crossing into lower-index clusters. Motivated by the sparse-crossing property established in Lemma 4.3, weonly consider sets Q >m of cardinality at most ⌈ log M ⌉ ǫ .27 alue function. For a subset of items S ⊆ N and a permutation π : S → [ | S | ], we say thatthe pair ( S, π ) is thin when cross m ( π ) ≤ ⌈ log M ⌉ ǫ for all m ∈ [ M ]. Given this definition, thevalue function F ( m, ψ m , Q >m ) represents the minimum makespan w ( S ) that can be attainedover all thin pairs ( S, π ) that satisfy the following conditions:1. Allowed items : The set S consists of items that belong to one of the clusters C . . . , C m orto Q >m . 
In other words, S ⊆ C [1 ,m ] ⊎ Q >m , where C [1 ,m ] = U µ ∈ [1 ,m ] C µ by convention.2. Required crossing items : The set S contains all items in Q >m , meaning that Q >m ⊆ S .3. Total profit : Ψ( π ) ≥ ψ m .Recycling some of the notation introduced in Section 2.3.2, we use Thin( m, ψ m , Q >m ) to denotethe collection of thin pairs that meet conditions 1-3 above. When the latter set is empty, wedefine F ( m, ψ m , Q >m ) = ∞ . With these definitions, Lemma 4.3 proves the existence of a thinpair ( S, π ) ∈ Thin( M, Ψ( π sparse ) , ∅ ) with F ( M, Ψ( π sparse ) , ∅ ) ≤ W T . It is worth pointing outthat, for the item set N sparse over which the permutation π sparse is defined, we can indeed assumethat w ( N sparse ) ≤ W T , as all items whose completion time with respect to π sparse exceeds W T can be eliminated, leaving us with a permutation that still satisfies Lemma 4.3. Therefore, hadwe been able to compute the maximal value ψ ∗ for which F ( M, ψ ∗ , ∅ ) ≤ W T , its correspondingpermutation would have guaranteed a profit of at least ψ ∗ ≥ Ψ( π sparse ) ≥ (1 − ǫ ) · Ψ( π ∗ ). Onceagain, since ψ m is a continuous parameter, we will eventually explain how to discretize ψ m totake polynomially-many values, incurring only an ǫ -loss in profit. Optimal substructure. In what follows, we identify the optimal substructure that allowsus to compute the value function F by means of dynamic programming. To this end, supposethat ( S, π ) is a thin pair that minimizes w ( S ) over Thin( m, ψ m , Q >m ). We will argue thatby eliminating from ( S, π ) a carefully-selected suffix of the permutation π consisting of itemsin clusters C m , . . . , C M , one obtains a thin pair that attains F ( m − , ψ m − , Q >m − ) for anappropriately defined state ( m − , ψ m − , Q >m − ). We proceed by first defining the latter state,for which a suitable alteration of ( S, π ) will be shown to be optimal: • Crossing set : Q >m − is defined as the set of items in C m ⊎ Q >m that appear before thelast item in C , . . . , C m − with respect to the permutation π . Namely, Q >m − = (cid:26) i ∈ S ∩ ( C m ⊎ Q >m ) : π ( i ) < max j ∈ S ∩C [1 ,m − π ( j ) (cid:27) . (8) • Profit requirement : ψ m − = [ ψ m − P i ∈ S \ ( C [1 ,m − ⊎Q >m − ) ϕ π ( i )] + .It is worth pointing out that, for this state to be well-defined, we should ensure that Q >m − indeed consists of at most ⌈ log M ⌉ ǫ items. To understand why this property is satisfied, note thatsince every item in Q >m − appears in the permutation π before the last item in S ∩ C [1 ,m − , wehave |Q >m − | ≤ max µ ∈ [ m − cross µ ( π ) ≤ ⌈ log M ⌉ ǫ , where the last inequality holds since ( S, π ) isthin.Now, let us define the pair ( ˆ S, ˆ π ), in which ˆ S = S ∩ ( C [1 ,m − ⊎ Q >m − ), meaning that ˆ S isthe restriction of S to items belonging to either one of the clusters C , . . . , C m − or to Q >m − . It28s not difficult to verify that any item in ˆ S appears in π before any item in S \ ˆ S , as any item in S ∩ C [ m,M ] that appears before an item in C [1 ,m − is necessarily a member of Q >m − . Therefore,the items in ˆ S form a prefix of π , whereas those in S \ ˆ S form the remaining suffix. Given thisobservation, we define the permutation ˆ π : ˆ S → [ | ˆ S | ] as the former prefix, or equivalently, asthe restriction of π to the items in ˆ S .In Lemma 4.9 below, we show that the pair ( ˆ S, ˆ π ) indeed resides within Thin( m − , ψ m − , Q >m − ). Subsequently, we prove in Lemma 4.10 that this pair is in fact makespan-optimal over the latter set. 
To avoid deviating from the overall flow of this section, the corre-sponding proofs are presented in Appendices C.4 and C.5, respectively. Lemma 4.9. ( ˆ S, ˆ π ) ∈ Thin( m − , ψ m − , Q >m − ) . Lemma 4.10. w ( ˆ S ) = F ( m − , ψ m − , Q >m − ) . Recursive equations. Given the optimal substructure characterization discussed above, weproceed by explaining how to express F ( m, ψ m , Q >m ) in recursive form. In essence, had weknown what the preceding state ( m − , ψ m − , Q >m − ) is, the remaining question would havebeen that of identifying the lightest set of “extra” items E to be appended, along with theirinternal permutation π E : E → [ |E| ], under a marginal profit constraint. Formally, to capturethe agreement between crossing items, we say that state ( m − , ψ m − , Q >m − ) is conceivablefor state ( m, ψ m , Q >m ) when Q >m − \ C m ⊆ Q >m . In the opposite direction, ( m, ψ m , Q >m ) isreachable from ( m − , ψ m − , Q >m − ) when there exist an item set E and permutation π E : E → [ |E| ] that simultaneously satisfy the following constraints:1. Extra items : The collection of extra items can be written as E = E m ⊎ ( Q >m \ Q >m − ).Here, items in E m are to be picked out of cluster C m , with the exclusion of those appearingin Q >m − , meaning that we have the constraint E m ⊆ C m \ Q >m − . Concurrently, eachand every item in Q >m \ Q >m − should be picked.2. Marginal profit : P i ∈E ϕ π E ( i ) ≥ ψ m − ψ m − , where the term ϕ π E ( i ) denotes the profitof item i with respect to the permutation π E , when its completion time is increasedby F ( m − , ψ m − , Q >m − ). This constraint guarantees that, by appending π E to thepermutation that achieves F ( m − , ψ m − , Q >m − ), we obtain a total profit of at least ψ m .Letting Extra[ ( m,ψ m , Q >m )( m − ,ψ m − , Q >m − ) ] denote the collection of item sets and permutations that satisfythese constraints, we mention in passing that this set may be empty. Moreover, it will be utilizedonly for purposes of analysis, and in particular, we will not assume that Extra[ ( m,ψ m , Q >m )( m − ,ψ m − , Q >m − ) ]can be efficiently constructed. Nevertheless, the function value F ( m, ψ m , Q >m ) can still beexpressed by minimizing F ( m − , ψ m − , Q >m − ) + w ( E ) over all conceivable states ( m − , ψ m − , Q >m − ) and over all item sets and permutations ( E , π E ) ∈ Extra[ ( m,ψ m , Q >m )( m − ,ψ m − , Q >m − ) ].For convenience, when F ( m, ψ m , Q >m ) ≤ W T , we use Best( m, ψ m , Q >m ) to denote an arbitrarystate ( m − , ψ m − , Q >m − ) chosen out of those for which the minimum value F ( m, ψ m , Q >m )is attained. As mentioned earlier, we wish to compute the maximal value ψ ∗ that satisfies F ( M, ψ ∗ , ∅ ) ≤ W T , as its corresponding permutation guarantees a profit of at least (1 − ǫ ) · Ψ( π ∗ ).29 pproximate recursion. That said, due to having a lower bound on the marginal profit,even when Best( m, ψ m , Q >m ) is known, the recursive formulation above is expected to identifyan item set and permutation ( E , π E ) ∈ Extra[ ( m,ψ m , Q >m )Best( m,ψ m , Q >m ) ] for which w ( E ) is minimized. Thissetting can be viewed as an “inverse” generalized incremental knapsack problem, where theobjective is to minimize makespan rather than to maximize profit. 
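To ground the optimal-substructure step in something concrete, the sketch below uses our own illustrative representation (a thin pair given as an ordered item list, clusters as sets, and per-item data in dictionaries) to compute the predecessor state $(m-1, \psi_{m-1}, \mathcal{Q}_{>m-1})$ of (8) together with the prefix pair, via a helper that evaluates the sequencing profits $\varphi_\pi(i)$; it is a sketch under these assumptions, not the paper's implementation.

```python
from typing import Dict, List, Sequence, Set, Tuple

def sequencing_profits(order: Sequence[str], weight: Dict[str, float],
                       W: Sequence[float], p: Dict[str, Sequence[float]]) -> Dict[str, float]:
    """phi_pi(i): the best per-period profit p[i][t] over periods t whose capacity W[t]
    covers item i's completion time under the ordering `order` (0-indexed periods)."""
    profits, completion = {}, 0.0
    for i in order:
        completion += weight[i]
        feasible = [p[i][t] for t in range(len(W)) if W[t] >= completion]
        profits[i] = max(feasible, default=0.0)
    return profits

def predecessor_state(order: Sequence[str], clusters: List[Set[str]], m: int,
                      psi_m: float, crossing: Set[str], weight, W, p
                      ) -> Tuple[int, float, Set[str], List[str]]:
    """Given a thin pair (S, pi) for state (m, psi_m, Q_{>m}), return the predecessor state
    (m-1, psi_{m-1}, Q_{>m-1}) and the prefix pair used in the optimal-substructure argument."""
    lower = set().union(*clusters[:m - 1])              # items of C_1, ..., C_{m-1}
    pos = {item: k for k, item in enumerate(order)}
    lower_positions = [pos[i] for i in order if i in lower]
    last_lower = max(lower_positions) if lower_positions else -1
    # Q_{>m-1}: items of C_m or Q_{>m} preceding the last item of the lower clusters (eq. (8))
    q_prev = {i for i in order
              if (i in clusters[m - 1] or i in crossing) and pos[i] < last_lower}
    keep = lower | q_prev
    prefix = [i for i in order if i in keep]            # the prefix pair (S-hat, pi-hat)
    phi = sequencing_profits(order, weight, W, p)
    dropped = sum(phi[i] for i in order if i not in keep)
    psi_prev = max(psi_m - dropped, 0.0)                # [psi_m - sum of dropped profits]^+
    return m - 1, psi_prev, q_prev, prefix
```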
To deal with this obstacle, weemploy our QPTAS for bounded weight ratio instances (see Section 3) in order to approximatelysolve these recursive equations.Specifically, for ∆ ≥ 0, we say that constraint 2 is ( ǫ, ∆)-satisfied when P i ∈E ϕ +∆ π E ( i ) ≥ (1 − ǫ ) · ( ψ m − ψ m − ), where ϕ +∆ π E ( i ) is the profit of item i with respect to the permutation π E , when its completion time is increased by ∆. As such, the standard sense of satisfying thisconstraint can be recovered by picking ǫ = 0 and ∆ = F ( m − , ψ m − , Q >m − ). With thisdefinition, we say that state ( m, ψ m , Q >m ) is ( ǫ, ∆)-reachable from state ( m − , ψ m − , Q >m − )when there exist an item set E and permutation π E : E → [ |E| ] that satisfy constraint 1 and( ǫ, ∆)-satisfy constraint 2; as before, Extra ǫ, ∆ [ ( m,ψ m , Q >m )( m − ,ψ m − , Q >m − ) ] will stand for the collectionof such item sets and permutations. In what follows, we devise an auxiliary procedure forapproximately solving the recursive equations, as summarized in the next claim; for readabilitypurposes, the proof is deferred to Appendix C.6. Lemma 4.11. Suppose that ( m, ψ m , Q >m ) and ( m − , ψ m − , Q >m − ) are two given states, suchthat F ( m, ψ m , Q >m ) ≤ W T and ( m − , ψ m − , Q >m − ) = Best( m, ψ m , Q >m ) . Given a parameter ∆ ≤ F ( m − , ψ m − , Q >m − ) , we can identify an item set ˆ E and permutation ˆ π ˆ E : ˆ E → [ | ˆ E | ] forwhich:1. ( ˆ E , ˆ π ˆ E ) ∈ Extra ǫ, ∆ [ ( m,ψ m , Q >m )( m − ,ψ m − , Q >m − ) ] .2. w ( ˆ E ) ≤ F ( m, ψ m , Q >m ) − F ( m − , ψ m − , Q >m − ) .The running time of our algorithm is O (( nT ) O ( ǫ · (log n +log M )) · |I| O (1) ) , regardless of whetherthe assumptions above hold or not. With this procedure in-hand, we define an approximate value function ˆ F , whose state spaceis identical to that of F . However, rather than attempting to solve an inverse generalizedincremental knapsack problem, the recursive equations through which ˆ F is defined will tackle thelatter problem in an approximate way via our auxiliary procedure. To formalize this approach,the function value ˆ F ( m, ψ m , Q >m ) is evaluated as follows: • Terminal states ( m = 0 ) : Here, we simply define ˆ F (0 , ψ , Q > ) = F (0 , ψ , Q > ). While F -values are unknown in general, F (0 , ψ , Q > ) evaluates to either w ( Q > ), when thereexists a permutation π Q > : Q > → [ |Q > | ] with profit Ψ( π Q > ) ≥ ψ , or to ∞ other-wise. This distinction can be made by enumerating over all permutations of Q > in time O (( ǫ log M ) O ( ǫ log M ) ) = O ( |I| O (( ǫ log |I| ) O (1) ) ), since |Q > | ≤ ⌈ log M ⌉ ǫ . • General states ( m ∈ [ M ]): For each state ( m − , ψ m − , Q >m − ), we instantiateLemma 4.11 with ∆ = ˆ F ( m − , ψ m − , Q >m − ), to obtain the item set ˆ E and its per-mutation ˆ π ˆ E : ˆ E → [ | ˆ E| ]. The value ˆ F ( m, ψ m , Q >m ) is determined by minimizingˆ F ( m − , ψ m − , Q >m − ) + w ( ˆ E ) over all conceivable states ( m − , ψ m − , Q >m − ) for30hich ( ˆ E , ˆ π ˆ E ) ∈ Extra ǫ, ∆ [ ( m,ψ m , Q >m )( m − ,ψ m − , Q >m − ) ], noting that the latter condition can easily betested.It is important to emphasize that, when employing our auxiliary procedure above, we have noway of knowing a-priori whether the assumptions made in Lemma 4.11 hold or not. Nevertheless,as we show in the next lemma, whose proof is provided in Appendix C.8, any profit requirementwhich is attainable by the original dynamic program F can be attained up to factor 1 − ǫ byour approximate program ˆ F . 
The precise relationship we establish between these functions canbe formally stated as follows. Lemma 4.12. Let ( m, ψ m , Q >m ) be a state for which F ( m, ψ m , Q >m ) ≤ W T . Then, ˆ F ( m, ψ m , Q >m ) ≤ F ( m, ψ m , Q >m ) , where the makespan ˆ F ( m, ψ m , Q >m ) is attained by an itemset ˆ S m and a permutation ˆ π ˆ S m : ˆ S m → [ | ˆ S m | ] for which: • Allowed and required items : ˆ S m ⊆ C [1 ,m ] ⊎ Q >m and Q >m ⊆ ˆ S m . • Profit : Ψ(ˆ π ˆ S m ) ≥ (1 − ǫ ) · ψ m . As previously mentioned, the primary intent of this section is to compute a permutationwith a profit of at least (1 − ǫ ) · Ψ( π sparse ). To argue that we have nearly achieved this objective,recall that Lemma 4.3 proves the existence of a thin pair ( S, π ) ∈ Thin( M, Ψ( π sparse ) , ∅ ) with F ( M, Ψ( π sparse ) , ∅ ) ≤ W T . Therefore, as an immediate consequence of Lemma 4.12, we inferthat ˆ F ( M, Ψ( π sparse ) , ∅ ) ≤ W T , which is attained by a permutation π with a profit of Ψ( π ) ≥ (1 − ǫ ) · Ψ( π sparse ). The discrete program ˜ F . That said, the above-mentioned existence proof still does notcorrespond to a constructive algorithm, due to the continuity of the profit requirement parameter ψ m . To discretize this parameter, similarly to Section 2.3.3, we restrict ψ m to a finite set ofvalues, D ψ = { d · ǫp max n : d ∈ [ n ǫ ] } . In turn, we use ˜ F ( m, ψ m , Q >m ) to denote the resultingdynamic program over the discretized set of states, whose recursive equations are identical tothose of ˆ F , except for instantiating Lemma 4.11 with ∆ = ˜ F ( m − , ψ m − , Q >m − ).We conclude our analysis by lower-bounding the best-possible profit achievable through thisdynamic program, showing that it indeed matches that of the permutation π sparse up to ǫ -relatedterms. To avoid redundancy, we omit the corresponding proof, as it is nearly identical to thatof Lemma 2.6. Lemma 4.13. There exists a value ˜ ψ ∈ D ψ such that ˜ ψ ≥ (1 − ǫ ) · Ψ( π sparse ) and such that ˜ F ( M, ˜ ψ, ∅ ) ≤ W T . This makespan is attained by an item set ˜ S and a permutation ˜ π ˜ S whoseprofit is Ψ(˜ π ˜ S ) ≥ (1 − ǫ ) · ˜ ψ ≥ (1 − ǫ ) · Ψ( π sparse ) . Running time. We first observe that the function ˜ F ( m, ψ m , Q >m ) is being evaluated over O ( n O ( ǫ log M ) · |I| O (1) ) possible states. To verify this claim, note that there are O ( M ) = O ( |I| )choices for the cluster index m , and that the discretized profit parameter ψ m takes values in D ψ , with |D ψ | = O ( n ǫ ). In addition, the set of crossing items Q >m is of cardinality at most ⌈ log M ⌉ ǫ , implying that there are only O ( n O ( ǫ log M ) ) subsets to consider for this parameter.Now, evaluating ˜ F ( m, ψ m , Q >m ) for a given state depends on its type:31 Terminal states ( m = 0 ) : As previously explained, such states are handled by enumeratingover all permutations of Q > in time O ( |I| O (( ǫ log |I| ) O (1) ) ). • General states ( m ∈ [ M ]): Here, each state ( m − , ψ m − , Q >m − ) would involve a singleapplication of our auxiliary procedure, running in O (( nT ) O ( ǫ · (log n +log M )) · |I| O (1) ) ac-cording to Lemma 4.11. As argued above, there are only O ( n O ( ǫ log M ) · |I| O (1) ) states ofthe form ( m − , ψ m − , Q >m − ) to be considered.Overall, we incur a running time of O ( |I| O (( ǫ log |I| ) O (1) ) ), as stated in Theorem 4.1. References David Adjiashvili, Sandro Bosio, Robert Weismantel, and Rico Zenklusen. Time-expandedpackings. 
In Proceedings of the 41st International Colloquium on Automata, Languages and Programming, pages 64–76, 2014. 1
Eleni C. Akrida, Jurek Czyzowicz, Leszek Gasieniec, Lukasz Kuszner, and Paul G. Spirakis. Temporal flows in temporal networks. Journal of Computer and System Sciences, 103:46–60, 2019. 1
Aris Anagnostopoulos, Fabrizio Grandoni, Stefano Leonardi, and Andreas Wiese. A mazing 2 + ǫ approximation for unsplittable flow on a path. ACM Transactions on Algorithms, 14(4), 2018. 3
Ali Aouad and Danny Segev. An approximate dynamic programming approach to the incremental knapsack problem, 2020. Working paper. 2
Brenda S. Baker. Approximation algorithms for NP-complete problems on planar graphs. Journal of the ACM, 41(1):153–180, 1994. 21
Nikhil Bansal, Amit Chakrabarti, Amir Epstein, and Baruch Schieber. A quasi-PTAS for unsplittable flow on line graphs. In Proceedings of the 38th ACM Symposium on Theory of Computing, pages 721–729, 2006. 3
Jatin Batra, Naveen Garg, Amit Kumar, Tobias Mömke, and Andreas Wiese. New approximation schemes for unsplittable flow on a path. In Proceedings of the 26th Annual ACM-SIAM Symposium on Discrete Algorithms, pages 47–58, 2015. 3
Daniel Bienstock, Jay Sethuraman, and Chun Ye. Approximation algorithms for the incremental knapsack problem via disjunctive programming, 2013. arXiv preprint arXiv:1311.4563. 2
Natashia Boland, Thomas Kalinowski, Hamish Waterer, and Lanbo Zheng. Scheduling arc maintenance jobs in a network to maximize total flow over time. Discrete Applied Mathematics, 163(1):34–52, 2014. 1
Paul Bonsma, Jens Schulz, and Andreas Wiese. A constant-factor approximation algorithm for unsplittable flow on paths. SIAM Journal on Computing, 43(2):767–799, 2014. 3
Gruia Calinescu, Amit Chakrabarti, Howard Karloff, and Yuval Rabani. An improved approximation algorithm for resource allocation. ACM Transactions on Algorithms, 7(4), 2011. 3
Alberto Caprara. Packing 2-dimensional bins in harmony. In Proceedings of the 43rd Annual IEEE Symposium on Foundations of Computer Science, pages 490–499, 2002. 1
Chandra Chekuri and Sanjeev Khanna. A polynomial time approximation scheme for the multiple knapsack problem. SIAM Journal on Computing, 35(3):713–728, 2005. 3
Reuven Cohen, Liran Katzir, and Danny Raz. An efficient approximation for the generalized assignment problem. Information Processing Letters, 100(4):162–166, 2006. 3
Federico Della Croce, Ulrich Pferschy, and Rosario Scatamacchia. Approximating the 3-period incremental knapsack problem. Journal of Discrete Algorithms, 52:55–69, 2018. 2
Federico Della Croce, Ulrich Pferschy, and Rosario Scatamacchia. On approximating the incremental knapsack problem. Discrete Applied Mathematics, 264:26–42, 2019. 2
Leah Epstein. On bin packing with clustering and bin packing with delays, 2019. arXiv preprint arXiv:1908.06727. 1
Yuri Faenza and Igor Malinovic. A PTAS for the time-invariant incremental knapsack problem. In Proceedings of the 5th International Symposium on Combinatorial Optimization, pages 157–169, 2018. 2
Reza Zanjirani Farahani, Zvi Drezner, and Nasrin Asgari. Single facility location and relocation problem with time dependent weights and discrete planning horizon. Annals of Operations Research, 167:353–368, 2009. 1
Uriel Feige and Jan Vondrák. The submodular welfare problem with demand queries. Theory of Computing, 6:247–290, 2010. 3
Lisa Fleischer, Michel X. Goemans, Vahab S. Mirrokni, and Maxim Sviridenko. Tight approximation algorithms for maximum separable assignment problems.
Mathematics of OperationsResearch , 36(3):416–431, 2011. 3Lester R. Ford and Delbert R. Fulkerson. Maximal flow through a network. Canadian Journalof Mathematics , 8:399–404, 1956. 1Lukas Graf, Tobias Harks, and Leon Sering. Dynamic flows with adaptive route choice. Math-ematical Programming (forthcoming) , 2020. 1Fabrizio Grandoni, Tobias M¨omke, Andreas Wiese, and Hang Zhou. A (5 / ǫ )-approximationfor unsplittable flow on a path: Placing small tasks into boxes. In Proceedings of the 50thAnnual ACM Symposium on Theory of Computing , pages 607–619, 2018. 3Martin Groß, Jan-Philipp W. Kappmeier, Daniel R. Schmidt, and Melanie Schmidt. Approxi-mating earliest arrival flows in arbitrary networks. In Proceedings of the 20th Annual EuropeanSymposium on Algorithms , pages 551–562, 2012. 133effrey R. K. Hartline. Incremental Optimization . PhD thesis, Department of Computer Science,Cornell University, 2008. 2Dorit S. Hochbaum and Wolfgang Maass. Approximation schemes for covering and packingproblems in image processing and VLSI. Journal of the ACM , 32(1):130–136, 1985. 21Anisse Ismaili. Routing games over time with FIFO policy. In Proceedings of the 13th Conferenceon Web and Internet Economics , pages 266–280, 2017. 1Maokai Lin and Patrick Jaillet. On the quickest flow problem in dynamic networks - A para-metric min-cost flow approach. In Proceedings of the 26th Annual ACM-SIAM Symposiumon Discrete Algorithms , pages 1343–1356, 2015. 1Stefan Nickel and Francisco Saldanha-da Gama. Multi-period facility location. In GilbertLaporte, Stefan Nickel, and Francisco Saldanha da Gama, editors, Location Science , pages303–326. Springer International Publishing, 2019. 1Zeev Nutov, Israel Beniaminy, and Raphael Yuster. A (1 − /e )-approximation algorithm forthe generalized assignment problem. Operations Research Letters , 34(3):283–288, 2006. 3Alexa M. Sharp. Incremental Algorithms: Solving Problems in a Changing World . PhD thesis,Department of Computer Science, Cornell University, 2007. 2David B. Shmoys and ´Eva Tardos. An approximation algorithm for the generalized assignmentproblem. Mathematical Programming , 62:461–474, 1993. 2, 3, 6, 11, 13, 15Martin Skutella. An introduction to network flows over time. In William Cook, L´aszl´o Lov´asz,and Jens Vygen, editors, Research Trends in Combinatorial Optimization , pages 451–482.Springer, 2009. 1Chun Ye. On the Trade-offs between Modeling Power and Algorithmic Complexity . PhD thesis,Columbia University, 2016. 2 A Additional Proofs from Section 2 A.1 Proof of Claim 2.5 We first show that ( ˜ S + , ˜ π + ) is indeed a bulky pair. For this purpose, since ( ˜ S, ˜ π ) is bulky, itsuffices to explain why each item i ∈ Q is necessarily k i -heavy, where k i is the unique index forwhich C ˜ π + ( i ) ∈ I k i . This claim follows by noting that, for such items, the way we construct( ˜ S + , ˜ π + ) leads to a completion time of C ˜ π + ( i ) = w ( ˜ S ) + X j ∈ Q : π ( j ) ≤ π ( i ) w j < w ( ˆ S ) + X j ∈ Q : π ( j ) ≤ π ( i ) w j = C π ( i ) . (9)34ecalling that Q = { i ∈ S : C π ( i ) ∈ I k } , we have just shown that k i ≤ k , and since item i is k -heavy due to the bulkiness of ( S, π ), it is k i -heavy as well.We proceed by showing that ( ˜ S + , ˜ π + ) satisfies conditions 1-3:1. Top index : top( ˜ S + , ˜ π + ) ≤ k . To verify this property, note that when Q = ∅ , we clearlyhave w ( ˜ S + ) = w ( ˜ S ) < w ( ˆ S ) = w ( S ), and therefore, top( ˜ S + , ˜ π + ) ≤ top( S, π ) ≤ k . 
Inthe opposite case, where Q = ∅ , the makespans of both ˜ S + and S are attained by therespective completion times of precisely the same item in Q . However, by inequality (9),we have C ˜ π + ( i ) ≤ C π ( i ) for every i ∈ Q , and it follows that top( ˜ S + , ˜ π + ) ≤ top( S, π ) ≤ k .2. Total profit : Ψ(˜ π + ) ≥ ψ k . Along the same lines, since C ˜ π + ( i ) ≤ C π ( i ) for every i ∈ Q , itfollows that ϕ ˜ π + ( i ) ≥ ϕ π ( i ) for such items. Thus,Ψ (cid:0) ˜ π + (cid:1) = X i ∈ ˜ S ϕ ˜ π + ( i ) + X i ∈ Q ϕ ˜ π + ( i )= X i ∈ ˜ S ϕ ˜ π ( i ) + X i ∈ Q ϕ ˜ π + ( i ) ≥ ψ k − + X i ∈ Q ϕ π ( i )= ψ k − X i ∈ Q ϕ π ( i ) + + X i ∈ Q ϕ π ( i ) ≥ ψ k . Here, the second equality holds since the permutations ˜ π + and ˜ π are identical whenrestricted to items in ˜ S . The first inequality follows by recalling that ( ˜ S, ˜ π ) ∈ Bulky( k − , ψ k − , Q k − ), meaning in particular that P i ∈ ˜ S ϕ ˜ π ( i ) = Ψ(˜ π ) ≥ ψ k − .3. Core : core( ˜ S + ) = Q k . One can easily verify that, for any pair of disjoint sets of items, S and S , we have core( S ∪ S ) = core(core( S ) ∪ core( S )). Therefore,core( ˜ S + ) = core( ˜ S ∪ Q )= core(core( ˜ S ) ∪ core( Q ))= core(core( S \ Q ) ∪ core( Q ))= core( S )= Q k , where the second equality follows by noting that ˜ S and Q are disjoint, and similarly, thefourth equality holds since S \ Q and Q are clearly disjoint. A.2 Proof of Lemma 2.6 Let us consider the sequence of states traversed by the dynamic program F , as it arrivesto the optimal state ( K, ψ ∗ K , Q ∗ K ); the latter is “optimal” in the sense that ψ ∗ K = ψ ∗ and F ( K, ψ ∗ K , Q ∗ K ) < ∞ . This sequence, along with the specific parameters and the bulky pair35orresponding to each state will be designated by:(0 , ψ ∗ , Q ∗ )( S ∗ , π S ∗ ) −−−−−→ Q ∗ ,π Q ∗ (1 , ψ ∗ , Q ∗ )( S ∗ , π S ∗ ) −−−−−→ Q ∗ ,π Q ∗ (2 , ψ ∗ , Q ∗ )( S ∗ , π S ∗ ) −−→ ······ · · · −−−−−→ Q ∗ k ,π Q ∗ k ( k, ψ ∗ k , Q ∗ k )( S ∗ k , π S ∗ k ) −−→ ······ · · · −−−−−→ Q ∗ K ,π Q ∗ K ( K, ψ ∗ K , Q ∗ K )( S ∗ K , π S ∗ K ) . To better understand this illustration, we note that for every k ∈ [ K ], the collection of items Q ∗ k and their internal permutation π Q ∗ k are precisely those by which the dynamic program F transitions from state ( k − , ψ ∗ k − , Q ∗ k − ) to state ( k, ψ ∗ k , Q ∗ k ). Consequently, the resulting itemset is S ∗ k = S ∗ k − ⊎ Q ∗ k , whereas the resulting permutation π S ∗ k is obtained by appending π Q ∗ k to π S ∗ k − . In addition, for the starting state, we have ψ ∗ = 0 and Q ∗ = ∅ .To prove the desired claim, we argue that one feasible sequence of states that can be traversedby the approximate program ˜ F is obtained when each profit parameter ψ ∗ k is substituted by˜ ψ k = ⌈ ψ ∗ k − min { k, | S ∗ k |} · ǫp max n ⌉ D ψ . Here, the operator ⌈·⌉ D ψ rounds its argument up to thenearest value in D ψ . In other words, as shown in Claim A.1 below, we prove that(0 , ˜ ψ , Q ∗ )( S ∗ , π S ∗ ) −−−−−→ Q ∗ ,π Q ∗ (1 , ˜ ψ , Q ∗ )( S ∗ , π S ∗ ) −−−−−→ Q ∗ ,π Q ∗ (2 , ˜ ψ , Q ∗ )( S ∗ , π S ∗ ) −−→ ······ · · · −−−−−→ Q ∗ k ,π Q ∗ k ( k, ˜ ψ k , Q ∗ k )( S ∗ k , π S ∗ k ) −−→ ······ · · · −−−−−→ Q ∗ K ,π Q ∗ K ( K, ˜ ψ K , Q ∗ K )( S ∗ K , π S ∗ K )forms a feasible sequence of states, action parameters, and bulky pairs for ˜ F . That is, we have( S ∗ k , π S ∗ k ) ∈ ^ Bulky( k, ˜ ψ k , Q ∗ k ), for every k ∈ [ K ] . In light of this result, we conclude in particularthat ˜ F ( K, ˜ ψ K , Q ∗ K ) < ∞ with˜ ψ K = l ψ ∗ K − min { K, | S ∗ K |} · ǫp max n m D ψ ≥ ψ ∗ − ǫp max ≥ (1 − ǫ ) · ψ ∗ . 
Here, the first inequality holds since ψ ∗ K = ψ ∗ and | S ∗ K | ≤ n . To understand the secondinequality, note that for every item i ∈ [ n ], the pair that consists of introducing this item andnothing more is necessarily bulky. Indeed, as a result, the completion time of item i would fallwithin the interval I k i , where k i is the unique integer for which (1 + ǫ ) k i − < w i ≤ (1 + ǫ ) k i .However, since w i > (1 + ǫ ) k i − ≥ ǫ · (1 + ǫ ) k i for ǫ ≤ , it follows that item i is k -heavy,implying in turn that the pair in question is bulky. Now, noting that this pair guarantees aprofit of max { p it : t ∈ [ T ] and w i ≤ W t } , any such expression provides a lower bound on ψ ∗ ,meaning that ψ ∗ ≥ max { p it : i ∈ [ n ] , t ∈ [ T ] , and w i ≤ W t } = p max . Claim A.1. ( S ∗ k , π S ∗ k ) ∈ ^ Bulky( k, ˜ ψ k , Q ∗ k ) , for every k ∈ [ K ] . Proof. We first note that the parameter ˜ ψ k is indeed well-defined for all k ∈ [ K ] , since˜ ψ k ≤ ⌈ ψ ∗ K ⌉ D ψ ≤ np max = max D ψ . Given this observation, we proceed to prove the claim byinduction on k . 36n the base case of k = 0, the claim trivially holds since ˜ ψ = 0, Q ∗ = ∅ , S ∗ = ∅ , and π S ∗ is the empty permutation. In the general case of k ≥ 1, to argue that ( S ∗ k , π S ∗ k ) ∈ ˜ B ( k, ˜ ψ k , Q ∗ k ),we consider two scenarios, depending on whether Q ∗ k is empty or not: • Case 1: Q ∗ k = ∅ . We first observe that, since S ∗ k = S ∗ k − ∪ Q ∗ k , we have S ∗ k = S ∗ k − by thecase hypothesis, implying in turn that ψ ∗ k = ψ ∗ k − and Q ∗ k = Q ∗ k − . Consequently,˜ ψ k = l ψ ∗ k − min { k, | S ∗ k |} · ǫp max n m D ψ ≤ l ψ ∗ k − − min { k − , | S ∗ k − |} · ǫp max n m D ψ = ˜ ψ k − . and it follows that ^ Bulky( k, ˜ ψ k , Q ∗ k ) ⊇ ^ Bulky( k, ˜ ψ k − , Q ∗ k − ) ⊇ ^ Bulky( k − , ˜ ψ k − , Q ∗ k − ),where the first inclusion holds since ˜ ψ k ≤ ˜ ψ k − and Q ∗ k = Q ∗ k − . Thus, ( S ∗ k , π S ∗ k ) =( S ∗ k − , π S ∗ k − ) ∈ ^ Bulky( k − , ˜ ψ k − , Q ∗ k − ) ⊆ ^ Bulky( k, ˜ ψ k , Q ∗ k ), where the middle transitionis precisely our induction hypothesis. • Case 2: Q ∗ k = ∅ . In this case, | S ∗ k | = | S ∗ k − | + | Q ∗ k | ≥ | S ∗ k − | + 1, as S ∗ k is the disjoint unionof S ∗ k − and Q ∗ k . By the inductive hypothesis, ( S ∗ k − , π S ∗ k − ) ∈ ^ Bulky( k − , ˜ ψ k − , Q ∗ k − ),meaning that for the purpose of proving ( S ∗ k , π S ∗ k ) ∈ ^ Bulky( k, ˜ ψ k , Q ∗ k ), it suffices to showthat ˜ ψ k − + P i ∈ Q ∗ k ϕ π S ∗ k ( i ) ≥ ˜ ψ k . We establish the latter inequality by noting that˜ ψ k − + X i ∈ Q ∗ k ϕ π S ∗ k ( i ) = ˜ ψ k − + ψ ∗ k − ψ ∗ k − ≥ (cid:16) ψ ∗ k − − min { k − , | S ∗ k − |} · ǫp max n (cid:17) + ψ ∗ k − ψ ∗ k − ≥ l ψ ∗ k − min { k, | S ∗ k |} · ǫp max n m D ψ = ˜ ψ k , where the first equality holds since ψ ∗ k = ψ ∗ k − + P i ∈ Q ∗ k ϕ π S ∗ k ( i ), by the optimality of ψ ∗ k . A.3 Proof of Lemma 2.7 In order to construct the required permutation, for every k ∈ [ K − π k be an arbitrarypermutation of the items that were assigned by x to bucket B k , i.e., { i ∈ [ n ] : x ik = 1 } . Inaddition, let π − be an arbitrary permutation of the remaining items, i.e., those that were not toassigned to any bucket. The permutation π x is now defined by concatenating these permutationsin order of increasing index, with π − appended at the end, namely, π x = h π , . . . , π K − , π − i . 
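As an illustration of this concatenation, the following sketch uses our own data layout (the solution $x$ given as a map from items to bucket indices, with `None` marking unassigned items) to build the order $\pi^x$; it is an illustrative helper, not the paper's implementation.

```python
from typing import Dict, List, Optional

def permutation_from_assignment(assignment: Dict[str, Optional[int]], K: int) -> List[str]:
    """Build pi^x by concatenating per-bucket orders: items assigned to bucket B_1 first,
    then B_2, ..., B_{K-1}, and finally the unassigned items (the permutation pi_-).
    Within each group the order is arbitrary; we sort by name only for determinism."""
    groups: List[List[str]] = [[] for _ in range(K)]   # index K-1 holds the unassigned tail
    for item, k in assignment.items():
        groups[k - 1 if k is not None else K - 1].append(item)
    return [item for group in groups for item in sorted(group)]

# Toy usage with K = 3 (two real buckets B_1, B_2): item "c" is left unassigned.
order = permutation_from_assignment({"a": 2, "b": 1, "c": None}, K=3)
assert order == ["b", "a", "c"]
```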
It is easy to verify that this construction can be implemented in $O(nK)$ time.

To obtain a lower bound of $\sum_{i \in [n]} \sum_{k \in [K-1]:\, i \in L_{k+1}} q_{ik} x_{ik}$ on the profit of this permutation, $\Psi(\pi^x) = \sum_{i \in [n]} \varphi_{\pi^x}(i)$, note that since each item is assigned to at most one bucket, it suffices to show that for every $i \in [n]$ and $k \in [K-1]$ with $x_{ik} = 1$, we necessarily have $\varphi_{\pi^x}(i) \geq q_{ik}$. For this purpose, we observe that
$$\varphi_{\pi^x}(i) = \max\{ p_{i,t} : t \in [T+1] \text{ and } W_t \geq C_{\pi^x}(i) \} \geq \max\{ p_{i,t} : t \in [T+1] \text{ and } W_t \geq (1+\epsilon)^k \} = q_{ik},$$
where the inequality above holds since $C_{\pi^x}(i) \leq (1+\epsilon)^k$. Indeed, this bound on the completion time of item $i$ can be derived by observing that every item $j$ that appears before $i$ in the permutation $\pi^x$ (i.e., $\pi^x(j) < \pi^x(i)$) was assigned by the solution $x$ to one of the buckets $B_1, \ldots, B_k$, and therefore,
$$C_{\pi^x}(i) = \sum_{j \in [n]:\, \pi^x(j) \leq \pi^x(i)} w_j \;\leq\; \sum_{\kappa \in [k]} \sum_{j \in L_{\kappa+1}} w_j x_{j\kappa} \;\leq\; \sum_{\kappa \in [k]} \mathrm{capacity}(B_\kappa) = \sum_{\kappa \in [k]} \big( (1+\epsilon)^\kappa - (1+\epsilon)^{\kappa-1} \big) \leq (1+\epsilon)^k,$$
where the second inequality follows from the second constraint of (IP).

B Additional Proofs from Section 3

B.1 Proof of Lemma 3.3

Clearly, $\mathcal{R} \cup \mathcal{G}$ is a chain for $\mathcal{I}$, as each of $\mathcal{R}$ and $\mathcal{G}$ is such a chain by itself. To verify the feasibility of $\mathcal{R} \cup \mathcal{G}$, note that for any time period $t \in [T]$, since $\mathcal{R}$ is feasible for $\mathcal{I}^{-\mathcal{G}}$ we have
$$w(R_t) \leq W^{-\mathcal{G}}_t = \min_{t \leq \tau \leq T} \big( W_\tau - w(G_\tau) \big) \leq W_t - w(G_t).$$
By recalling that $G_1 \subseteq \cdots \subseteq G_T$ and $R_1 \subseteq \cdots \subseteq R_T \subseteq \mathcal{N}^{-\mathcal{G}} = \mathcal{N} \setminus G_T$, it follows in particular that $G_t$ and $R_t$ are disjoint, implying in turn that $w(R_t \cup G_t) = w(R_t) + w(G_t) \leq W_t$, as required. Now, to account for the profit of $\mathcal{R} \cup \mathcal{G}$, we conclude that
$$\Phi(\mathcal{R} \cup \mathcal{G}) = \sum_{t \in [T]} \sum_{i \in (R_t \cup G_t) \setminus (R_{t-1} \cup G_{t-1})} p_{it} = \sum_{t \in [T]} \Big( \sum_{i \in R_t \setminus R_{t-1}} p_{it} + \sum_{i \in G_t \setminus G_{t-1}} p_{it} \Big) = \Phi(\mathcal{R}) + \Phi(\mathcal{G}).$$
Here, the second equality holds since $G_1 \subseteq \cdots \subseteq G_T$ and $R_1 \subseteq \cdots \subseteq R_T \subseteq \mathcal{N} \setminus G_T$, meaning that $(R_t \cup G_t) \setminus (R_{t-1} \cup G_{t-1})$ can be written as the disjoint union of $R_t \setminus R_{t-1}$ and $G_t \setminus G_{t-1}$.

B.2 Proof of Lemma 3.4

For convenience, let us denote the chain in question by $\mathcal{R} = \mathcal{S}|_{\mathcal{N} \setminus G_T}$. By observing that $R_T = S_T \cap (\mathcal{N} \setminus G_T) = S_T \setminus G_T \subseteq \mathcal{N} \setminus G_T$, it follows that $\mathcal{R}$ is also a chain for $\mathcal{I}^{-\mathcal{G}}$. We proceed by arguing that $\mathcal{R}$ is in fact feasible for the latter instance. To this end, note that for every $t \leq \tau$,
$$w(R_t) \leq w(R_\tau) = w(S_\tau) - w(G_\tau) \leq W_\tau - w(G_\tau),$$
where the middle equality follows by recalling that $S_\tau$ is the disjoint union of $G_\tau$ and $R_\tau$, and the last inequality is implied by the feasibility of $\mathcal{S}$ for $\mathcal{I}$. As a result, $w(R_t) \leq \min_{t \leq \tau \leq T}(W_\tau - w(G_\tau)) = W^{-\mathcal{G}}_t$, which proves that $\mathcal{R}$ is a feasible chain for $\mathcal{I}^{-\mathcal{G}}$.

We now turn our attention to showing that $\Phi(\mathcal{R}) = \Phi(\mathcal{S}) - \Phi(\mathcal{G})$. Again, based on the observation that $S_t$ is the disjoint union of $G_t$ and $R_t$ for every $t \in [T]$, we conclude that
$$\Phi(\mathcal{R}) + \Phi(\mathcal{G}) = \sum_{t \in [T]} \Big( \sum_{i \in R_t \setminus R_{t-1}} p_{it} + \sum_{i \in G_t \setminus G_{t-1}} p_{it} \Big) = \sum_{t \in [T]} \sum_{i \in (R_t \cup G_t) \setminus (R_{t-1} \cup G_{t-1})} p_{it} = \sum_{t \in [T]} \sum_{i \in S_t \setminus S_{t-1}} p_{it} = \Phi(\mathcal{S}).$$
Finally, suppose that $\mathcal{S}$ is optimal for $\mathcal{I}$ but, on the other hand, $\mathcal{R}$ is not optimal for $\mathcal{I}^{-\mathcal{G}}$, meaning that there exists a feasible chain $\mathcal{R}'$ for $\mathcal{I}^{-\mathcal{G}}$ with profit $\Phi(\mathcal{R}') > \Phi(\mathcal{R})$. Then, by Lemma 3.3, we infer that $\mathcal{R}' \cup \mathcal{G}$ is a feasible chain for $\mathcal{I}$, with profit $\Phi(\mathcal{R}' \cup \mathcal{G}) = \Phi(\mathcal{G}) + \Phi(\mathcal{R}') > \Phi(\mathcal{G}) + \Phi(\mathcal{R}) = \Phi(\mathcal{S})$, contradicting the optimality of $\mathcal{S}$.
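Because the residual-instance construction behind Lemmas 3.3 and 3.4 is used throughout the algorithm, a small sketch may be helpful. The code below uses our own conventions (0-indexed periods, chains as lists of nested item sets) and checks the conclusion of Lemma 3.3 on a toy instance; it is illustrative only.

```python
from typing import Dict, List, Set

def residual_capacities(W: List[float], G: List[Set[str]], weight: Dict[str, float]) -> List[float]:
    """Capacities of the residual instance I^{-G}: W^{-G}_t = min_{t <= tau <= T} (W_tau - w(G_tau))."""
    slack = [W[t] - sum(weight[i] for i in G[t]) for t in range(len(W))]
    out, running_min = [], float("inf")
    for s in reversed(slack):                 # suffix minima, scanned right to left
        running_min = min(running_min, s)
        out.append(running_min)
    return out[::-1]

def union_chain(R: List[Set[str]], G: List[Set[str]]) -> List[Set[str]]:
    """The combined chain R cup G of Lemma 3.3 (period-wise union)."""
    return [r | g for r, g in zip(R, G)]

def is_feasible(chain: List[Set[str]], W: List[float], weight: Dict[str, float]) -> bool:
    """A chain is feasible when w(S_t) <= W_t for every period t."""
    return all(sum(weight[i] for i in S) <= W_t for S, W_t in zip(chain, W))

# Toy check of Lemma 3.3: a chain feasible for the residual instance combines with G
# into a chain that is feasible for the original capacities.
weight = {"a": 2.0, "b": 3.0, "c": 4.0}
W = [5.0, 9.0]
G = [{"a"}, {"a", "c"}]
R = [set(), {"b"}]
assert is_feasible(R, residual_capacities(W, G, weight), weight)
assert is_feasible(union_chain(R, G), W, weight)
```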
B.3 Proof of Lemma 3.6 We say that an interval I k is non-empty with respect to the permutation π S ∗ if it contains thecompletion time of at least one item. Note that, since the latter completion time is within[ w min , nw max ] and we assume that w min = 3 (see Section 2.2), the interval I = [0 , 1] isclearly empty. Furthermore, any non-empty interval I k = ((1 + ǫ ) k − , (1 + ǫ ) k ] necessarilyhas ⌊ log ǫ ( w min ) ⌋ ≤ k ≤ ⌈ log ǫ ( nw max ) ⌉ . Therefore, the number of non-empty intervals withrespect to π S ∗ is at most ⌈ log ǫ ( nw max ) ⌉− ⌊ log ǫ ( w min ) ⌋ + 1 ≤ ·⌈ log ǫ ( nρ ) ⌉ . Now, any suchinterval I k is of length (1+ ǫ ) k − (1+ ǫ ) k − , meaning that the number of k -heavy items with a com-pletion time in this interval is at most (1+ ǫ ) k − (1+ ǫ ) k − ǫ · (1+ ǫ ) k ≤ ǫ , as every k -heavy item has a weightof at least ǫ · (1 + ǫ ) k . All in all, we have just shown that | G ∗ heavy | ≤ ·⌈ log ǫ ( nρ ) ⌉ ǫ ≤ nρ ) ǫ .39 .4 Proof of Lemma 3.7 For every item i ∈ N , let t i be its insertion time with respect to the optimal chain S ∗ . Byconvention, for non-inserted items (i.e., those in N \ S ∗ T ), we say that their “insertion time” is T + 1, with a profit of p i,T +1 = 0. As explained during the proof of Lemma 2.2, our constructionof the permutation π S ∗ guarantees that ϕ π S∗ ( i ) ≥ p i,t i for every item i ∈ N . While thisinequality was established for any chain-to-permutation mapping, one can easily notice that, dueto the optimality of S ∗ , we actually have ϕ π S∗ ( i ) = p i,t i for every i ∈ N . Otherwise, there wouldhave been at least one item with ϕ π S∗ ( i ) > p i,t i , implying that Ψ( π S ∗ ) > Φ( S ∗ ). By Lemma 2.2,the permutation π S ∗ can then be mapped to a feasible chain S with Φ( S ) = Ψ( π S ∗ ) > Φ( S ∗ ),contradicting the optimality of S ∗ . Thus, Φ( H ∗ ) = P i ∈ G ∗ heavy p i,t i = P i ∈ G ∗ heavy ϕ π S∗ ( i ) =Ψ heavy ( π S ∗ ). B.5 Proof of Lemma 3.10 We prove the lower bound α r ≥ rr +1 − rδ by induction on r . For r = 0, we have α = 0 and theclaim clearly holds. Now, for r ≥ α r = 1 − δ − α r − ≥ − δ − ( r − r − ( r − δ )= r (1 − δ ) r + 1 + r ( r − δ ≥ r (1 − δ )( r + 1)(1 + ( r − δ )= rr + 1 · (cid:18) − rδ r − δ (cid:19) ≥ rr + 1 − rδ. C Additional Proofs from Section 4 C.1 Proof of Lemma 4.4Sparse ( M − , M + )-crossing. On the one hand, our construction guarantees that the lastitem in C M − appears in position π ( i M − , M + ) − |A − | of the permutation ¯ π . On the otherhand, every item in C M + that appears before this position necessarily belongs to X M − , M + ( π ). Itfollows that there are at most |X M − , M + ( π ) | = ǫ such items, and therefore, cross M − , M + (¯ π ) ≤ ǫ . Completion times. We establish this property by considering three cases, depending onwhether the item in question appears before i M − , M + , belongs to A − , or belongs to ¯ A − . • Before i M − , M + : For every item i ∈ N with π ( i ) ≤ π ( i M − , M + ) − C ¯ π ( i ) = C π ( i ), since the permutations ¯ π and π are identical up to position π ( i M − , M + ) − • Items in A − : For every item i ∈ A − , we have C ¯ π ( i ) ≤ C π ( i ), since the collection of itemsappearing before i in ¯ π is a subset of those appearing before i in π .40 Items in ¯ A − : For every item i ∈ ¯ A − , the important observation is that the collection ofitems appearing before i in ¯ π consists of: (1) The same items appearing before i in π ,except for the eliminated item i M − , M + ; as well as (2) All items in A − appearing after i in π . 
Therefore, C ¯ π ( i ) ≤ C π ( i ) − w i M− , M + + w ( A − ) ≤ C π ( i ) . To understand the last inequality, recall that i M − , M + ∈ X M − , M + ( π ), meaning in partic-ular that this item resides within C M + . Since I = ( N , W ) is well-spaced, property 2 ofsuch instances implies that w i M− , M + is greater than the weight of any item in C M − by amultiplicative factor of at least n M + − max M − − /ǫ ≥ n , as max M − < min M + .Consequently, since all items in A − reside within C M − , we indeed have w i M− , M + ≥ n · max j ∈C M− w j ≥ w ( A − ). Difference. This property is straightforward, by construction of ¯ π . C.2 Proof of Claim 4.7 For simplicity of notation, let D = { i M − , M + : ( M − , M + ) ∈ Ω , X M − , M + = ∅} be the collectionof items that were removed throughout all recursive calls to our fixing procedure. Then, theprofit of the resulting permutation π sparse can be lower-bounded by observing thatΨ( π sparse ) = X i ∈N \D ϕ π sparse ( i ) ≥ X i ∈N \D ϕ π ∗ ( i )= Ψ( π ∗ ) − X i ∈D ϕ π ∗ ( i ) ≥ Ψ( π ∗ ) − ǫ · X ( M − , M + ) ∈ Ω ϕ π ∗ (cid:0) X M − , M + (cid:0) π [min M − , max M + ] (cid:1)(cid:1) . Here, the first inequality holds since, for any remaining item i ∈ N \D , it is not difficult to verify(by induction on the recursion level) that property 2 of the fixing procedure implies C π sparse ( i ) ≤ C π ∗ ( i ), and we therefore have ϕ π sparse ( i ) ≥ ϕ π ∗ ( i ). The second inequality is obtained by recallingthat any item i M − , M + ∈ D was chosen as the least profitable item in X M − , M + ( π [min M − , max M + ] )with respect to π ∗ , thus ϕ π ∗ ( i M − , M + ) ≤ ϕ π ∗ ( X M − , M + ( π [min M − , max M + ] )) |X M − , M + ( π [min M − , max M + ] ) | = ǫ · ϕ π ∗ (cid:0) X M − , M + (cid:0) π [min M − , max M + ] (cid:1)(cid:1) . C.3 Proof of Claim 4.8 By definition, X M − , M +1 ( π [min M − , max M +1 ] ) and X M − , M +2 ( π [min M − , max M +2 ] ) contain only itemsin M +1 -indexed clusters and M +2 -indexed clusters, respectively. Thus, when M +1 and M +2 aredisjoint, X M − , M +1 ( π [min M − , max M +1 ] ) and X M − , M +2 ( π [min M − , max M +2 ] ) must be disjoint as well.Hence, it remains to consider the scenario where M +1 and M +2 are not disjoint. In this case, the41ermutations π [min M − , max M +1 ] and π [min M − , max M +2 ] must have been created at different levelsof the recursive construction; we assume without loss of generality that π [min M − , max M +1 ] wascreated at a lower-index level. Therefore, M +2 ⊆ M +1 , and X M − , M +2 ( π [min M − , max M +2 ] ) consistsof only items in the right permutation, π [min M +1 , max M +1 ] . On the other hand, by construc-tion, any item in X M − , M +1 ( π [min M − , max M +1 ] ) ends up in the left permutation, π [min M − , max M − ] ,implying the disjointness of X M − , M +1 ( π [min M − , max M +1 ] ) and X M − , M +2 ( π [min M − , max M +2 ] ). C.4 Proof of Lemma 4.9 We first observe that the pair ( ˆ S, ˆ π ) is indeed thin. To this end, note that since the permutationˆ π is a prefix of π , for every m ∈ [ M ] we clearly have cross m (ˆ π ) ≤ cross m ( π ) ≤ ⌈ log M ⌉ ǫ , wherethe last inequality holds since ( S, π ) is thin. Next, we show that ( ˆ S, ˆ π ) satisfies conditions 1-3:1. Allowed items : By construction, ˆ S = S ∩ ( C [1 ,m − ⊎ Q >m − ), implying that ˆ S forms asubset of C [1 ,m − ⊎ Q >m − .2. Required crossing items : An additional implication of our definition of ˆ S is that Q >m − ⊆ ˆ S , since Q >m − ⊆ S by (8).3. 
C.3 Proof of Claim 4.8

By definition, $\mathcal{X}_{M^-,M^+_1}( \pi^{[\min M^-, \max M^+_1]} )$ and $\mathcal{X}_{M^-,M^+_2}( \pi^{[\min M^-, \max M^+_2]} )$ contain only items in $M^+_1$-indexed clusters and $M^+_2$-indexed clusters, respectively. Thus, when $M^+_1$ and $M^+_2$ are disjoint, $\mathcal{X}_{M^-,M^+_1}( \pi^{[\min M^-, \max M^+_1]} )$ and $\mathcal{X}_{M^-,M^+_2}( \pi^{[\min M^-, \max M^+_2]} )$ must be disjoint as well. Hence, it remains to consider the scenario where $M^+_1$ and $M^+_2$ are not disjoint. In this case, the permutations $\pi^{[\min M^-, \max M^+_1]}$ and $\pi^{[\min M^-, \max M^+_2]}$ must have been created at different levels of the recursive construction; we assume without loss of generality that $\pi^{[\min M^-, \max M^+_1]}$ was created at a lower-index level. Therefore, $M^+_2 \subseteq M^+_1$, and $\mathcal{X}_{M^-,M^+_2}( \pi^{[\min M^-, \max M^+_2]} )$ consists of only items in the right permutation, $\pi^{[\min M^+_1, \max M^+_1]}$. On the other hand, by construction, any item in $\mathcal{X}_{M^-,M^+_1}( \pi^{[\min M^-, \max M^+_1]} )$ ends up in the left permutation, $\pi^{[\min M^-, \max M^-]}$, implying the disjointness of $\mathcal{X}_{M^-,M^+_1}( \pi^{[\min M^-, \max M^+_1]} )$ and $\mathcal{X}_{M^-,M^+_2}( \pi^{[\min M^-, \max M^+_2]} )$.

C.4 Proof of Lemma 4.9

We first observe that the pair $(\hat{S}, \hat{\pi})$ is indeed thin. To this end, note that since the permutation $\hat{\pi}$ is a prefix of $\pi$, for every $m \in [M]$ we clearly have $\mathrm{cross}_m(\hat{\pi}) \le \mathrm{cross}_m(\pi) \le \frac{\lceil \log M \rceil}{\epsilon}$, where the last inequality holds since $(S, \pi)$ is thin. Next, we show that $(\hat{S}, \hat{\pi})$ satisfies conditions 1-3:
1. Allowed items: By construction, $\hat{S} = S \cap ( \mathcal{C}_{[1,m-1]} \uplus \mathcal{Q}_{>m-1} )$, implying that $\hat{S}$ forms a subset of $\mathcal{C}_{[1,m-1]} \uplus \mathcal{Q}_{>m-1}$.
2. Required crossing items: An additional implication of our definition of $\hat{S}$ is that $\mathcal{Q}_{>m-1} \subseteq \hat{S}$, since $\mathcal{Q}_{>m-1} \subseteq S$ by (8).
3. Total profit: To obtain a lower bound on the profit of $\hat{\pi}$, we observe that
\[ \Psi(\hat{\pi}) \;=\; \sum_{i \in \hat{S}} \varphi_{\hat{\pi}}(i) \;=\; \sum_{i \in \hat{S}} \varphi_{\pi}(i) \;=\; \Psi(\pi) - \sum_{i \in S \setminus ( \mathcal{C}_{[1,m-1]} \uplus \mathcal{Q}_{>m-1} )} \varphi_{\pi}(i) \;\ge\; \Big[ \psi_m - \sum_{i \in S \setminus ( \mathcal{C}_{[1,m-1]} \uplus \mathcal{Q}_{>m-1} )} \varphi_{\pi}(i) \Big]^+ \;=\; \psi_{m-1} . \]
Here, the second equality holds since $\hat{\pi}$ is a prefix of $\pi$, as previously mentioned. The third equality follows by noting that $S \setminus \hat{S} = S \setminus ( \mathcal{C}_{[1,m-1]} \uplus \mathcal{Q}_{>m-1} )$. The inequality above is obtained by observing that its left-hand side is non-negative, and by recalling that $(S,\pi) \in \mathrm{Thin}(m, \psi_m, \mathcal{Q}_{>m})$, implying that $\Psi(\pi) \ge \psi_m$. The last equality is precisely the definition of $\psi_{m-1}$.

C.5 Proof of Lemma 4.10

By way of contradiction, suppose there exists a pair $(\tilde{S}, \tilde{\pi}) \in \mathrm{Thin}(m-1, \psi_{m-1}, \mathcal{Q}_{>m-1})$ whose makespan is smaller than that of $\hat{S}$, namely, $w(\tilde{S}) < w(\hat{S})$. We begin by noticing that the item sets $S \setminus \hat{S}$ and $\tilde{S}$ are disjoint, since $S \setminus \hat{S} \subseteq ( \mathcal{C}_m \uplus \mathcal{Q}_{>m} ) \setminus \mathcal{Q}_{>m-1} \subseteq \mathcal{C}_{[m,M]} \setminus \mathcal{Q}_{>m-1}$, whereas $\tilde{S} \subseteq \mathcal{C}_{[1,m-1]} \uplus \mathcal{Q}_{>m-1}$, as $(\tilde{S}, \tilde{\pi}) \in \mathrm{Thin}(m-1, \psi_{m-1}, \mathcal{Q}_{>m-1})$. Taking advantage of this observation, we define a new pair $(\tilde{S}^+, \tilde{\pi}^+)$ as follows:
• The underlying set of items is given by $\tilde{S}^+ = \tilde{S} \uplus ( S \setminus \hat{S} )$.
• The permutation $\tilde{\pi}^+ : \tilde{S}^+ \to [ |\tilde{S}^+| ]$ is constructed by appending the items in $S \setminus \hat{S}$ to $\tilde{\pi}$, following their internal order in $\pi$.
The next claim shows that the resulting pair is a feasible solution to exactly the same subproblem for which $(S, \pi)$ is optimal.

Claim C.1. $(\tilde{S}^+, \tilde{\pi}^+) \in \mathrm{Thin}(m, \psi_m, \mathcal{Q}_{>m})$.

Proof. First, we show that $(\tilde{S}^+, \tilde{\pi}^+)$ is a thin pair. To this end, for every $\mu \in [M]$ with $\mathcal{C}_{\mu} \cap \tilde{S}^+ \neq \emptyset$, let $i_{\mu} \in \mathcal{C}_{\mu}$ be the item that appears last in $\tilde{\pi}^+$ out of this cluster, i.e., $i_{\mu} = \mathrm{argmax}_{i \in \tilde{S}^+ \cap \mathcal{C}_{\mu}} \tilde{\pi}^+(i)$. We proceed by considering two cases:
• Item $i_{\mu}$ appears in $\tilde{\pi}$: By construction, $\tilde{\pi}$ is a prefix of $\tilde{\pi}^+$, and therefore $\mathrm{cross}_{\mu}(\tilde{\pi}^+) = \mathrm{cross}_{\mu}(\tilde{\pi}) \le \frac{\lceil \log M \rceil}{\epsilon}$, where the last inequality holds since $(\tilde{S}, \tilde{\pi})$ is a thin pair.
• Item $i_{\mu}$ does not appear in $\tilde{\pi}$: In this case, $i_{\mu} \in S \setminus \hat{S} \subseteq \mathcal{C}_{[m,M]} \setminus \mathcal{Q}_{>m-1}$, implying that $\mu \ge m$. Thus, all items in clusters $\mathcal{C}_{\mu+1}, \ldots, \mathcal{C}_M$ that appear before $i_{\mu}$ in the permutation $\tilde{\pi}^+$ necessarily belong to $\mathcal{Q}_{>m}$, and we conclude that $\mathrm{cross}_{\mu}(\tilde{\pi}^+) \le |\mathcal{Q}_{>m}| \le \frac{\lceil \log M \rceil}{\epsilon}$.
Next, we show that $(\tilde{S}^+, \tilde{\pi}^+)$ satisfies conditions 1-3:
1. Allowed items: First note that $\tilde{S} \subseteq \mathcal{C}_{[1,m-1]} \uplus \mathcal{Q}_{>m-1} \subseteq \mathcal{C}_{[1,m]} \uplus \mathcal{Q}_{>m}$, where the first inclusion holds since $(\tilde{S}, \tilde{\pi}) \in \mathrm{Thin}(m-1, \psi_{m-1}, \mathcal{Q}_{>m-1})$ and the second follows by definition of $\mathcal{Q}_{>m-1}$ in (8). In addition, $S \subseteq \mathcal{C}_{[1,m]} \uplus \mathcal{Q}_{>m}$, since $(S, \pi) \in \mathrm{Thin}(m, \psi_m, \mathcal{Q}_{>m})$. Combining these two observations, we have $\tilde{S}^+ = \tilde{S} \uplus ( S \setminus \hat{S} ) \subseteq \mathcal{C}_{[1,m]} \uplus \mathcal{Q}_{>m}$ as required.
2. Required crossing items: To prove $\mathcal{Q}_{>m} \subseteq \tilde{S}^+$, we observe that
\[ \mathcal{Q}_{>m} \;\subseteq\; \mathcal{Q}_{>m-1} \uplus ( \mathcal{Q}_{>m} \setminus \mathcal{Q}_{>m-1} ) \;\subseteq\; \tilde{S} \cup ( S \setminus \hat{S} ) \;=\; \tilde{S}^+ . \]
To better understand the second inclusion, note that $\mathcal{Q}_{>m-1} \subseteq \tilde{S}$, since $(\tilde{S}, \tilde{\pi}) \in \mathrm{Thin}(m-1, \psi_{m-1}, \mathcal{Q}_{>m-1})$. In addition, $\mathcal{Q}_{>m} \setminus \mathcal{Q}_{>m-1} \subseteq S \setminus \hat{S}$, since $\mathcal{Q}_{>m} \subseteq S$ due to having $(S, \pi) \in \mathrm{Thin}(m, \psi_m, \mathcal{Q}_{>m})$, and since $( \mathcal{Q}_{>m} \setminus \mathcal{Q}_{>m-1} ) \cap \hat{S} = \emptyset$, due to having $\mathcal{Q}_{>m} \subseteq \mathcal{C}_{[m+1,M]}$ and $\hat{S} \subseteq \mathcal{C}_{[1,m-1]} \uplus \mathcal{Q}_{>m-1}$, where the latter inclusion holds since $(\hat{S}, \hat{\pi}) \in \mathrm{Thin}(m-1, \psi_{m-1}, \mathcal{Q}_{>m-1})$.
3. Total profit: By construction, any item $i \in S \setminus \hat{S}$ appears in the permutation $\tilde{\pi}^+$ after all items in $\tilde{S}$, and moreover, the internal order between the items in $S \setminus \hat{S}$ is determined according to $\pi$.
Hence, we can bound the completion time of any item $i \in S \setminus \hat{S}$ by noting that
\[ C_{\tilde{\pi}^+}(i) \;=\; w(\tilde{S}) + \sum_{j \in S \setminus \hat{S} :\, \pi(j) < \pi(i)} w_j \;<\; w(\hat{S}) + \sum_{j \in S \setminus \hat{S} :\, \pi(j) < \pi(i)} w_j \;\le\; C_{\pi}(i) , \]
where the strict inequality above follows from our initial assumption that $w(\tilde{S}) < w(\hat{S})$. Consequently, $\varphi_{\tilde{\pi}^+}(i) \ge \varphi_{\pi}(i)$ for such items, and we have
\[ \Psi\big( \tilde{\pi}^+ \big) \;=\; \Psi(\tilde{\pi}) + \sum_{i \in S \setminus \hat{S}} \varphi_{\tilde{\pi}^+}(i) \quad (10) \]
\[ \;\ge\; \psi_{m-1} + \sum_{i \in S \setminus \hat{S}} \varphi_{\pi}(i) \quad (11) \]
\[ \;=\; \Big[ \psi_m - \sum_{i \in S \setminus ( \mathcal{C}_{[1,m-1]} \uplus \mathcal{Q}_{>m-1} )} \varphi_{\pi}(i) \Big]^+ + \sum_{i \in S \setminus \hat{S}} \varphi_{\pi}(i) \quad (12) \]
\[ \;\ge\; \psi_m - \sum_{i \in S \setminus ( \mathcal{C}_{[1,m-1]} \uplus \mathcal{Q}_{>m-1} )} \varphi_{\pi}(i) + \sum_{i \in S \setminus \hat{S}} \varphi_{\pi}(i) \;=\; \psi_m . \quad (13) \]
Here, equality (10) holds since $\tilde{\pi}$ is a prefix of $\tilde{\pi}^+$, with the items in $S \setminus \hat{S}$ forming the remaining suffix. Inequality (11) holds since $(\tilde{S}, \tilde{\pi}) \in \mathrm{Thin}(m-1, \psi_{m-1}, \mathcal{Q}_{>m-1})$, meaning that $\Psi(\tilde{\pi}) \ge \psi_{m-1}$, and since $\varphi_{\tilde{\pi}^+}(i) \ge \varphi_{\pi}(i)$ for all $i \in S \setminus \hat{S}$, as shown above. Equality (12) follows from the definition of $\psi_{m-1}$. The final equality in (13) is obtained by noting that $S \setminus ( \mathcal{C}_{[1,m-1]} \uplus \mathcal{Q}_{>m-1} ) = S \setminus \hat{S}$.

Consequently, by combining our initial assumption that $w(\tilde{S}) < w(\hat{S})$ along with Claim C.1, we have just identified a pair $(\tilde{S}^+, \tilde{\pi}^+) \in \mathrm{Thin}(m, \psi_m, \mathcal{Q}_{>m})$ with a makespan of
\[ w(\tilde{S}^+) \;=\; w(\tilde{S}) + w( S \setminus \hat{S} ) \;<\; w(\hat{S}) + w( S \setminus \hat{S} ) \;=\; w(S) , \]
contradicting the fact that $(S, \pi)$ minimizes $w(S)$ over the set $\mathrm{Thin}(m, \psi_m, \mathcal{Q}_{>m})$.

C.6 Proof of Lemma 4.11

Overview. Prior to delving into the nuts-and-bolts of our approach, we provide a high-level overview of its main ideas. For this purpose, to make sure condition 2 of Lemma 4.11 is satisfied, meaning that the item set $\hat{\mathcal{E}}$ we compute has a total weight of at most $F(m, \psi_m, \mathcal{Q}_{>m}) - F(m-1, \psi_{m-1}, \mathcal{Q}_{>m-1})$, our algorithm relies on "knowing" the latter difference, which will be justified through binary search. With this limitation, restricting ourselves to the item set $( \mathcal{C}_m \uplus \mathcal{Q}_{>m} ) \setminus \mathcal{Q}_{>m-1}$, we aim to identify a feasible chain whose associated permutation $(\epsilon, \Delta)$-satisfies constraint 2. To this end, our algorithm "guesses" the insertion time of every item in $\mathcal{Q}_{>m} \setminus \mathcal{Q}_{>m-1}$ by enumerating over all feasible chains $G = (G_1, \ldots, G_T)$ whose set of introduced items is $G_T = \mathcal{Q}_{>m} \setminus \mathcal{Q}_{>m-1}$. Since there are at most $\frac{\lceil \log M \rceil}{\epsilon}$ such items, the number of required guesses is only $O( T^{O( \log M / \epsilon )} )$. For each guess, we construct the residual generalized incremental knapsack instance, as explained in Section 3.1, which will be solved to near-optimality via the approximation scheme proposed in Theorem 3.1.

Algorithm. For ease of presentation, on top of all input ingredients mentioned in Lemma 4.11, we feed into the upcoming algorithm an additional parameter $\omega \ge 0$, whose role will be explained later on. With this parameter, our algorithm operates as follows:
1. We define the generalized incremental knapsack instance $\hat{\mathcal{I}}^{\omega} = ( \hat{\mathcal{N}}, \hat{W}^{\omega} )$, where:
• The set of items $\hat{\mathcal{N}}$ is comprised of those allowed by constraint 1, namely, $\hat{\mathcal{N}} = ( \mathcal{C}_m \uplus \mathcal{Q}_{>m} ) \setminus \mathcal{Q}_{>m-1}$.
• Additionally, we reduce the capacity $W_t$ of each period $t \in [T]$ by $\Delta$, while ensuring that the maximum resulting capacity does not exceed $\omega$, meaning that $\hat{W}^{\omega}_t = \min \{ [ W_t - \Delta ]^+, \omega \}$.
2. For every feasible chain $G = (G_1, \ldots, G_T)$ for the instance $\hat{\mathcal{I}}^{\omega}$ with $G_T = \mathcal{Q}_{>m} \setminus \mathcal{Q}_{>m-1}$, we construct the residual instance $\hat{\mathcal{I}}^{\omega, -G} = ( \hat{\mathcal{N}}^{-G}, \hat{W}^{\omega, -G} )$.
The approximation scheme we proposed in Section 3 is now applied to this instance, thereby obtaining a feasible chain $R^G$ whose profit is within factor $1-\epsilon$ of the residual optimum (see Theorem 3.1). When there are no feasible chains with $G_T = \mathcal{Q}_{>m} \setminus \mathcal{Q}_{>m-1}$, we abort and report this finding.
3. Out of all chains $G$ considered in step 2, let $G^{\omega}$ be the one for which the sum of profits $\Phi(G^{\omega}) + \Phi(R^{G^{\omega}})$ is maximized. The item set we return is $\mathcal{E}^{\omega} = R^{G^{\omega}}_T \uplus ( \mathcal{Q}_{>m} \setminus \mathcal{Q}_{>m-1} )$, i.e., all items inserted by the chain $R^{G^{\omega}}$ along with those in $\mathcal{Q}_{>m} \setminus \mathcal{Q}_{>m-1}$. We define the corresponding permutation $\pi_{\mathcal{E}^{\omega}} : \mathcal{E}^{\omega} \to [ |\mathcal{E}^{\omega}| ]$ as the one constructed by Lemma 2.2 for the chain $G^{\omega} \cup R^{G^{\omega}}$.

The binary search. We assume without loss of generality that all item weights take integer values. This property can easily be enforced by uniform scaling, which produces an equivalent instance whose input length is polynomial in that of the original instance. Now, knowing in advance that the total weight of any item set is an integer within $[0, w(\mathcal{N})]$, we employ our $\omega$-parameterized algorithm to conduct a binary search over this interval, with the objective of identifying the smallest integer $\omega_{\min}$ such that:
• For $\omega_{\min}$, the algorithm returns a permutation $\pi_{\mathcal{E}^{\omega_{\min}}}$ that satisfies $\sum_{i \in \mathcal{E}^{\omega_{\min}}} \varphi^{+\Delta}_{\pi_{\mathcal{E}^{\omega_{\min}}}}(i) \ge (1-\epsilon) \cdot ( \psi_m - \psi_{m-1} )$.
• In contrast, for $\omega_{\min} - 1/2$, the algorithm either aborts at step 2, or returns a permutation $\pi_{\mathcal{E}^{\omega_{\min}-1/2}}$ satisfying $\sum_{i \in \mathcal{E}^{\omega_{\min}-1/2}} \varphi^{+\Delta}_{\pi_{\mathcal{E}^{\omega_{\min}-1/2}}}(i) < (1-\epsilon) \cdot ( \psi_m - \psi_{m-1} )$.
To verify that this search procedure is well-defined, let us examine the endpoints of $[0, w(\mathcal{N})]$. For $\omega = 0$, if we obtain a permutation $\pi_{\mathcal{E}^0}$ that satisfies $\sum_{i \in \mathcal{E}^0} \varphi^{+\Delta}_{\pi_{\mathcal{E}^0}}(i) \ge (1-\epsilon) \cdot ( \psi_m - \psi_{m-1} )$, our immediate conclusion is that $\omega_{\min} = 0$. For $\omega = w(\mathcal{N})$, as shown in Lemma C.2 below, we are guaranteed to obtain a permutation $\pi_{\mathcal{E}^{w(\mathcal{N})}}$ that satisfies $\sum_{i \in \mathcal{E}^{w(\mathcal{N})}} \varphi^{+\Delta}_{\pi_{\mathcal{E}^{w(\mathcal{N})}}}(i) \ge (1-\epsilon) \cdot ( \psi_m - \psi_{m-1} )$.

Running time. Clearly, the number of binary search iterations we incur is linear in the input size. Now, within each iteration, since there are $O(T)$ guesses for the insertion time of every item $i \in \mathcal{Q}_{>m} \setminus \mathcal{Q}_{>m-1}$ and since $| \mathcal{Q}_{>m} \setminus \mathcal{Q}_{>m-1} | \le \frac{\lceil \log M \rceil}{\epsilon}$, there are only $O( T^{O( \log M / \epsilon )} )$ chains $G$ to consider in step 2. The crucial observation is that, for each such chain, the residual instance $\hat{\mathcal{I}}^{\omega, -G}$ is defined over the set of items
\[ \hat{\mathcal{N}}^{-G} \;=\; \hat{\mathcal{N}} \setminus G_T \;=\; \big( ( \mathcal{C}_m \uplus \mathcal{Q}_{>m} ) \setminus \mathcal{Q}_{>m-1} \big) \setminus ( \mathcal{Q}_{>m} \setminus \mathcal{Q}_{>m-1} ) \;\subseteq\; \mathcal{C}_m \setminus \mathcal{Q}_{>m-1} \;\subseteq\; \mathcal{C}_m . \quad (14) \]
Thus, $\hat{\mathcal{I}}^{\omega, -G}$ is in fact a single-cluster instance, where the weights of any two items differ by a multiplicative factor of at most $n^{2/\epsilon}$, by property 1 of well-spaced instances (see Section 4.1). By Theorem 3.1, the running time of our approximation scheme for such instances is truly quasi-polynomial, being $O( (nT)^{O( \frac{1}{\epsilon^2} \cdot \log n )} \cdot |\mathcal{I}|^{O(1)} )$. All in all, we incur a running time of $O( (nT)^{O( \frac{1}{\epsilon^2} \cdot ( \log n + \log M ) )} \cdot |\mathcal{I}|^{O(1)} )$, with room to spare.
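The following Python sketch summarizes how the binary search locates $\omega_{\min}$. It is illustrative pseudocode rather than the paper's implementation: run_omega_algorithm is a hypothetical placeholder for steps 1-3 of the $\omega$-parameterized algorithm above, returning the $\Delta$-shifted profit $\sum_{i \in \mathcal{E}^{\omega}} \varphi^{+\Delta}_{\pi_{\mathcal{E}^{\omega}}}(i)$, or None when step 2 aborts; the search is stated over integers, so the role of $\omega_{\min} - 1/2$ in the text is played by $\omega_{\min} - 1$ here.

# Illustrative binary-search skeleton over omega in [0, w(N)].
def find_omega_min(total_weight, target, run_omega_algorithm):
    """Locate an integer omega that attains the target shifted profit while the
    preceding value does not, mirroring the termination condition in the text."""
    def succeeds(omega):
        profit = run_omega_algorithm(omega)
        return profit is not None and profit >= target

    if succeeds(0):
        return 0
    lo, hi = 0, total_weight     # omega = w(N) succeeds by Lemma C.2
    while lo + 1 < hi:           # invariant: lo fails, hi succeeds
        mid = (lo + hi) // 2
        if succeeds(mid):
            hi = mid
        else:
            lo = mid
    return hi

Each probe invokes the full enumeration of step 2, so the number of probes, and hence of enumeration rounds, remains linear in the input size, consistent with the running-time accounting above.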
Final solution and analysis. In the remainder of this section, we argue that the item set $\mathcal{E}^{\omega_{\min}}$ and its permutation $\pi_{\mathcal{E}^{\omega_{\min}}} : \mathcal{E}^{\omega_{\min}} \to [ |\mathcal{E}^{\omega_{\min}}| ]$ satisfy the properties required by Lemma 4.11. For this purpose, recalling that the latter lemma assumes $F(m, \psi_m, \mathcal{Q}_{>m}) \le W_T$ and $(m-1, \psi_{m-1}, \mathcal{Q}_{>m-1}) = \mathrm{Best}(m, \psi_m, \mathcal{Q}_{>m})$, let $\mathcal{E}^*$ and $\pi^*_{\mathcal{E}^*} : \mathcal{E}^* \to [ |\mathcal{E}^*| ]$ be the item set and permutation attaining the minimum makespan $w(\mathcal{E}^*)$ over $\mathrm{Extra}[\substack{(m,\psi_m,\mathcal{Q}_{>m}) \\ (m-1,\psi_{m-1},\mathcal{Q}_{>m-1})}]$, noting that by definition,
\[ F(m, \psi_m, \mathcal{Q}_{>m}) \;=\; F(m-1, \psi_{m-1}, \mathcal{Q}_{>m-1}) + w(\mathcal{E}^*) . \quad (15) \]
At the heart of our analysis lies the following claim, showing that whenever the $\omega$-parameterized algorithm is employed with $\omega \ge w(\mathcal{E}^*)$, we obtain a permutation whose $\Delta$-shifted profit is at least $(1-\epsilon) \cdot ( \psi_m - \psi_{m-1} )$. We provide the proof in Appendix C.7.

Lemma C.2. For any $\omega \ge w(\mathcal{E}^*)$, the $\omega$-parameterized algorithm computes an item set $\mathcal{E}^{\omega}$ and a permutation $\pi_{\mathcal{E}^{\omega}} : \mathcal{E}^{\omega} \to [ |\mathcal{E}^{\omega}| ]$ that satisfy $\sum_{i \in \mathcal{E}^{\omega}} \varphi^{+\Delta}_{\pi_{\mathcal{E}^{\omega}}}(i) \ge (1-\epsilon) \cdot ( \psi_m - \psi_{m-1} )$.

With this result in place, the properties required by Lemma 4.11 can easily be established, as we show next.

Lemma C.3. The item set $\mathcal{E}^{\omega_{\min}}$ and permutation $\pi_{\mathcal{E}^{\omega_{\min}}}$ satisfy properties 1 and 2.

Proof. We begin by explaining why $( \mathcal{E}^{\omega_{\min}}, \pi_{\mathcal{E}^{\omega_{\min}}} ) \in \mathrm{Extra}_{\epsilon,\Delta}[\substack{(m,\psi_m,\mathcal{Q}_{>m}) \\ (m-1,\psi_{m-1},\mathcal{Q}_{>m-1})}]$, as stated in property 1:
• Constraint 1 is satisfied: We first show that $\mathcal{E}^{\omega_{\min}} \subseteq ( \mathcal{C}_m \uplus \mathcal{Q}_{>m} ) \setminus \mathcal{Q}_{>m-1}$ and $\mathcal{Q}_{>m} \setminus \mathcal{Q}_{>m-1} \subseteq \mathcal{E}^{\omega_{\min}}$. Since the item set in question is defined in step 3 as $\mathcal{E}^{\omega_{\min}} = R^{G^{\omega_{\min}}}_T \uplus ( \mathcal{Q}_{>m} \setminus \mathcal{Q}_{>m-1} )$, it suffices to explain why $R^{G^{\omega_{\min}}}_T \subseteq \mathcal{C}_m \setminus \mathcal{Q}_{>m-1}$. The latter inclusion follows by noting that $R^{G^{\omega_{\min}}}$ is a feasible chain for the instance $\hat{\mathcal{I}}^{\omega_{\min}, -G^{\omega_{\min}}}$, where the set of items is $\hat{\mathcal{N}}^{-G^{\omega_{\min}}} \subseteq \mathcal{C}_m \setminus \mathcal{Q}_{>m-1}$, as shown in the first inclusion of (14).
• Constraint 2 is $(\epsilon, \Delta)$-satisfied: To argue that $\sum_{i \in \mathcal{E}^{\omega_{\min}}} \varphi^{+\Delta}_{\pi_{\mathcal{E}^{\omega_{\min}}}}(i) \ge (1-\epsilon) \cdot ( \psi_m - \psi_{m-1} )$, note that following Lemma C.2, there exists a value $\omega \le w(\mathcal{N})$ for which $\sum_{i \in \mathcal{E}^{\omega}} \varphi^{+\Delta}_{\pi_{\mathcal{E}^{\omega}}}(i) \ge (1-\epsilon) \cdot ( \psi_m - \psi_{m-1} )$, and the desired claim is implied by the termination condition of our binary search.
We now turn our attention to proving that $w(\mathcal{E}^{\omega_{\min}}) \le F(m, \psi_m, \mathcal{Q}_{>m}) - F(m-1, \psi_{m-1}, \mathcal{Q}_{>m-1})$, as stated in property 2. To this end, since $F(m, \psi_m, \mathcal{Q}_{>m}) = F(m-1, \psi_{m-1}, \mathcal{Q}_{>m-1}) + w(\mathcal{E}^*)$ by equation (15), it remains to argue that $w(\mathcal{E}^{\omega_{\min}}) \le w(\mathcal{E}^*)$. To verify this relation, note that
\[ w(\mathcal{E}^{\omega_{\min}}) \;=\; w\big( R^{G^{\omega_{\min}}}_T \uplus ( \mathcal{Q}_{>m} \setminus \mathcal{Q}_{>m-1} ) \big) \;=\; w\big( R^{G^{\omega_{\min}}}_T \big) + w\big( G^{\omega_{\min}}_T \big) \;\le\; \hat{W}^{\omega_{\min}}_T \;=\; \min\big\{ [ W_T - \Delta ]^+, \omega_{\min} \big\} \;\le\; \omega_{\min} \;\le\; w(\mathcal{E}^*) . \]
Here, the second equality holds since $G^{\omega_{\min}}_T = \mathcal{Q}_{>m} \setminus \mathcal{Q}_{>m-1}$, as stated in step 2. The first inequality follows by observing that the chain $R^{G^{\omega_{\min}}} \cup G^{\omega_{\min}}$ is feasible for $\hat{\mathcal{I}}^{\omega_{\min}}$, due to Lemma 3.3, meaning in particular that for period $T$ we have $w( R^{G^{\omega_{\min}}}_T ) + w( G^{\omega_{\min}}_T ) \le \hat{W}^{\omega_{\min}}_T$. The final inequality is derived by combining Lemma C.2 and the termination condition of our binary search.

C.7 Proof of Lemma C.2

Constructing a feasible chain for $\hat{\mathcal{I}}^{\omega}$. With respect to the item set $\mathcal{E}^*$ and permutation $\pi^*_{\mathcal{E}^*}$, let us define a chain $S^*$ for the instance $\hat{\mathcal{I}}^{\omega}$ as follows:
• The collection of inserted items is $S^*_T = \mathcal{E}^*$.
• The insertion time $t_i$ of each item $i \in S^*_T$ is the one maximizing $p_{it}$ over $\{ t \in [T] : W_t \ge F(m-1, \psi_{m-1}, \mathcal{Q}_{>m-1}) + C_{\pi^*_{\mathcal{E}^*}}(i) \}$.
Note that the latter set is indeed non-empty, since
\[ C_{\pi^*_{\mathcal{E}^*}}(i) \;\le\; w(\mathcal{E}^*) \;=\; F(m, \psi_m, \mathcal{Q}_{>m}) - F(m-1, \psi_{m-1}, \mathcal{Q}_{>m-1}) \;\le\; W_T - F(m-1, \psi_{m-1}, \mathcal{Q}_{>m-1}) , \]
where the equality above is exactly (15), and the last inequality holds since $F(m, \psi_m, \mathcal{Q}_{>m}) \le W_T$, as assumed in Lemma 4.11.

The next claim establishes the feasibility and profit guarantee of $S^*$ with respect to $\hat{\mathcal{I}}^{\omega}$. Below, $\Phi^{\omega}(\cdot)$ stands for the profit function with respect to this instance.

Claim C.4. The chain $S^*$ is feasible for $\hat{\mathcal{I}}^{\omega}$, with a profit of $\Phi^{\omega}(S^*) = \sum_{i \in \mathcal{E}^*} \varphi_{\pi^*_{\mathcal{E}^*}}(i)$.

Proof. To prove the feasibility of $S^*$, we first observe that, for every time period $t \in [T]$,
\[ w(S^*_t) \;\le\; w(\mathcal{E}^*) \;\le\; \omega , \quad (16) \]
where the first inequality holds since $S^*_t \subseteq S^*_T = \mathcal{E}^*$, and the second inequality is precisely what Lemma C.2 assumes. In addition, by definition of $S^*$, every item $i \in S^*_t$ is associated with a completion time of $C_{\pi^*_{\mathcal{E}^*}}(i) \le W_t - F(m-1, \psi_{m-1}, \mathcal{Q}_{>m-1})$. Thus, when the latter difference is negative, we have $S^*_t = \emptyset$ and therefore $w(S^*_t) = 0 \le [ W_t - \Delta ]^+$. In the opposite case,
\[ w(S^*_t) \;\le\; W_t - F(m-1, \psi_{m-1}, \mathcal{Q}_{>m-1}) \;\le\; W_t - \Delta \;\le\; [ W_t - \Delta ]^+ , \quad (17) \]
where the second inequality holds since $\Delta \le F(m-1, \psi_{m-1}, \mathcal{Q}_{>m-1})$, as assumed in Lemma 4.11. Putting together inequalities (16) and (17), we have $w(S^*_t) \le \min\{ [ W_t - \Delta ]^+, \omega \} = \hat{W}^{\omega}_t$, meaning that the chain $S^*$ is indeed feasible for $\hat{\mathcal{I}}^{\omega}$.

Now, to derive the profit guarantee $\Phi^{\omega}(S^*) = \sum_{i \in \mathcal{E}^*} \varphi_{\pi^*_{\mathcal{E}^*}}(i)$, we observe that since $\Phi^{\omega}(S^*) = \sum_{i \in \mathcal{E}^*} p_{it_i}$, it suffices to show that $p_{it_i} = \varphi_{\pi^*_{\mathcal{E}^*}}(i)$ for each item $i \in \mathcal{E}^*$. To this end, note that our choice for the insertion time $t_i$ of each item $i \in \mathcal{E}^*$ exactly follows the definition of $\varphi_{\pi^*_{\mathcal{E}^*}}(i)$, implying that $p_{it_i} = \varphi_{\pi^*_{\mathcal{E}^*}}(i)$.
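For concreteness, the feasibility test appearing in Claim C.4 can be spelled out mechanically. The following minimal sketch is illustrative only (the helper names are not from the paper); it checks the cumulative weights of a chain against the capped capacities $\hat{W}^{\omega}_t = \min\{ [W_t - \Delta]^+, \omega \}$ defined in step 1 of the algorithm.

# Illustrative check of chain feasibility for hat{I}^omega, whose period-t capacity
# is hat{W}^omega_t = min(max(W_t - Delta, 0), omega).

def capped_capacity(W_t, delta, omega):
    return min(max(W_t - delta, 0), omega)

def chain_is_feasible(chain_weights, W, delta, omega):
    """chain_weights[t] = w(S_t), the total weight inserted up to and including
    period t (non-decreasing); W[t] is the original capacity of period t."""
    return all(chain_weights[t] <= capped_capacity(W[t], delta, omega)
               for t in range(len(W)))

# Toy usage with hypothetical numbers:
W = [4, 7, 10]
print(chain_is_feasible([0, 2, 5], W, delta=2, omega=6))   # True
print(chain_is_feasible([3, 3, 5], W, delta=2, omega=6))   # False: 3 > min(4-2, 6)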
Concluding the proof. Having established this claim, we are now ready to show that the item set $\mathcal{E}^{\omega}$ and permutation $\pi_{\mathcal{E}^{\omega}}$ satisfy $\sum_{i \in \mathcal{E}^{\omega}} \varphi^{+\Delta}_{\pi_{\mathcal{E}^{\omega}}}(i) \ge (1-\epsilon) \cdot ( \psi_m - \psi_{m-1} )$. For this purpose, similarly to $\Phi^{\omega}(\cdot)$, let $\Psi^{\omega}(\cdot)$ be the profit function of a given permutation with respect to the instance $\hat{\mathcal{I}}^{\omega}$ in its sequencing formulation. With this notation, we obtain the required lower bound by arguing that
\[ \sum_{i \in \mathcal{E}^{\omega}} \varphi^{+\Delta}_{\pi_{\mathcal{E}^{\omega}}}(i) \;=\; \Psi^{\omega}( \pi_{\mathcal{E}^{\omega}} ) \;\ge\; \Phi^{\omega}\big( G^{\omega} \cup R^{G^{\omega}} \big) \;\ge\; (1-\epsilon) \cdot ( \psi_m - \psi_{m-1} ) . \]
We prove the first equality and second inequality in Claims C.5 and C.6, respectively. To understand the first inequality, recall that the permutation $\pi_{\mathcal{E}^{\omega}}$ is constructed in step 3 according to Lemma 2.2 for the chain $G^{\omega} \cup R^{G^{\omega}}$, which guarantees $\Psi^{\omega}( \pi_{\mathcal{E}^{\omega}} ) \ge \Phi^{\omega}( G^{\omega} \cup R^{G^{\omega}} )$.

Claim C.5. $\sum_{i \in \mathcal{E}^{\omega}} \varphi^{+\Delta}_{\pi_{\mathcal{E}^{\omega}}}(i) = \Psi^{\omega}( \pi_{\mathcal{E}^{\omega}} )$.

Proof. Let us use $\varphi^{\omega}_{\pi_{\mathcal{E}^{\omega}}}(i)$ to denote the profit contribution of item $i$ with respect to the permutation $\pi_{\mathcal{E}^{\omega}}$ in the instance $\hat{\mathcal{I}}^{\omega}$. In other words, $\varphi^{\omega}_{\pi_{\mathcal{E}^{\omega}}}(i) = \max\{ p_{it} : t \in [T+1] \text{ and } \hat{W}^{\omega}_t \ge C_{\pi_{\mathcal{E}^{\omega}}}(i) \}$. With this notation, we have $\Psi^{\omega}( \pi_{\mathcal{E}^{\omega}} ) = \sum_{i \in \mathcal{E}^{\omega}} \varphi^{\omega}_{\pi_{\mathcal{E}^{\omega}}}(i)$, meaning that to prove the desired equality, it remains to show that $\varphi^{+\Delta}_{\pi_{\mathcal{E}^{\omega}}}(i) = \varphi^{\omega}_{\pi_{\mathcal{E}^{\omega}}}(i)$ for every item $i \in \mathcal{E}^{\omega}$. To verify this claim, note that
\[ \varphi^{+\Delta}_{\pi_{\mathcal{E}^{\omega}}}(i) \;=\; \max\big\{ p_{it} : t \in [T+1] \text{ and } W_t - \Delta \ge C_{\pi_{\mathcal{E}^{\omega}}}(i) \big\} \;=\; \max\big\{ p_{it} : t \in [T+1] \text{ and } [ W_t - \Delta ]^+ \ge C_{\pi_{\mathcal{E}^{\omega}}}(i) \big\} \]
\[ \;=\; \max\big\{ p_{it} : t \in [T+1] \text{ and } \min\{ [ W_t - \Delta ]^+, \omega \} \ge C_{\pi_{\mathcal{E}^{\omega}}}(i) \big\} \;=\; \max\big\{ p_{it} : t \in [T+1] \text{ and } \hat{W}^{\omega}_t \ge C_{\pi_{\mathcal{E}^{\omega}}}(i) \big\} \;=\; \varphi^{\omega}_{\pi_{\mathcal{E}^{\omega}}}(i) . \]
Here, the second equality holds since $C_{\pi_{\mathcal{E}^{\omega}}}(i) \ge 0$. The third equality is obtained by noting that $C_{\pi_{\mathcal{E}^{\omega}}}(i) \le w(\mathcal{E}^{\omega}) = w( G^{\omega}_T ) + w( R^{G^{\omega}}_T ) \le \hat{W}^{\omega}_T \le \omega$, where the equality follows by definition of $\mathcal{E}^{\omega}$ and the second inequality is implied by the feasibility of $G^{\omega} \cup R^{G^{\omega}}$ for the instance $\hat{\mathcal{I}}^{\omega}$. The last two equalities follow from the definitions of $\hat{W}^{\omega}_t$ and $\varphi^{\omega}_{\pi_{\mathcal{E}^{\omega}}}(i)$.

Claim C.6. $\Phi^{\omega}( G^{\omega} \cup R^{G^{\omega}} ) \ge (1-\epsilon) \cdot ( \psi_m - \psi_{m-1} )$.

Proof. We begin by noting that since $( \mathcal{E}^*, \pi^*_{\mathcal{E}^*} ) \in \mathrm{Extra}[\substack{(m,\psi_m,\mathcal{Q}_{>m}) \\ (m-1,\psi_{m-1},\mathcal{Q}_{>m-1})}]$, this item set and permutation necessarily satisfy constraint 1, which informs us that $\mathcal{E}^* \subseteq ( \mathcal{C}_m \uplus \mathcal{Q}_{>m} ) \setminus \mathcal{Q}_{>m-1}$ and $\mathcal{Q}_{>m} \setminus \mathcal{Q}_{>m-1} \subseteq \mathcal{E}^*$. As a result, recalling that the collection of items introduced by the chain $S^*$ is precisely $\mathcal{E}^*$, it follows that the latter chain can be expressed as $S^* = S^*|_{\mathcal{Q}_{>m} \setminus \mathcal{Q}_{>m-1}} \cup S^*|_{\mathcal{C}_m \setminus \mathcal{Q}_{>m-1}}$. We remind the reader that, based on the terminology of Section 3, the first term $S^*|_{\mathcal{Q}_{>m} \setminus \mathcal{Q}_{>m-1}}$ is the restriction of $S^*$ to the items in $\mathcal{Q}_{>m} \setminus \mathcal{Q}_{>m-1}$, whereas the second term $S^*|_{\mathcal{C}_m \setminus \mathcal{Q}_{>m-1}}$ is its restriction to $\mathcal{C}_m \setminus \mathcal{Q}_{>m-1}$.

The crucial observation is that, since the chain $S^*$ introduces all items in $\mathcal{Q}_{>m} \setminus \mathcal{Q}_{>m-1}$, its restriction $G^* = S^*|_{\mathcal{Q}_{>m} \setminus \mathcal{Q}_{>m-1}}$ is necessarily considered in step 2 of our algorithm; moreover, $S^*|_{\mathcal{C}_m \setminus \mathcal{Q}_{>m-1}}$ constitutes a feasible chain for the residual instance $\hat{\mathcal{I}}^{\omega, -G^*}$, by Lemma 3.4. As such, the corresponding chain $R^{G^*}$ we compute for the latter instance is guaranteed to have a profit of $\Phi^{\omega}( R^{G^*} ) \ge (1-\epsilon) \cdot \Phi^{\omega}( S^*|_{\mathcal{C}_m \setminus \mathcal{Q}_{>m-1}} )$. Consequently, since the chain $G^{\omega}$ is the one maximizing $\Phi^{\omega}( G^{\omega} ) + \Phi^{\omega}( R^{G^{\omega}} )$ over all chains considered in step 2, we conclude that $G^{\omega} \cup R^{G^{\omega}}$ is a feasible chain for $\hat{\mathcal{I}}^{\omega}$ with a profit of
\[ \Phi^{\omega}\big( G^{\omega} \cup R^{G^{\omega}} \big) \;=\; \Phi^{\omega}( G^{\omega} ) + \Phi^{\omega}\big( R^{G^{\omega}} \big) \;\ge\; \Phi^{\omega}( G^* ) + \Phi^{\omega}\big( R^{G^*} \big) \;\ge\; \Phi^{\omega}\big( S^*|_{\mathcal{Q}_{>m} \setminus \mathcal{Q}_{>m-1}} \big) + (1-\epsilon) \cdot \Phi^{\omega}\big( S^*|_{\mathcal{C}_m \setminus \mathcal{Q}_{>m-1}} \big) \]
\[ \;\ge\; (1-\epsilon) \cdot \Phi^{\omega}( S^* ) \;=\; (1-\epsilon) \cdot \sum_{i \in \mathcal{E}^*} \varphi_{\pi^*_{\mathcal{E}^*}}(i) \;\ge\; (1-\epsilon) \cdot ( \psi_m - \psi_{m-1} ) . \]
Here, the first and second equalities follow from Lemma 3.3 and Claim C.4, respectively. The last inequality holds since $( \mathcal{E}^*, \pi^*_{\mathcal{E}^*} ) \in \mathrm{Extra}[\substack{(m,\psi_m,\mathcal{Q}_{>m}) \\ (m-1,\psi_{m-1},\mathcal{Q}_{>m-1})}]$ by definition, and hence, constraint 2 is necessarily satisfied.

C.8 Proof of Lemma 4.12

We prove the lemma by induction on $m$.

Base case: $m = 0$. In this case, for any state with $F(0, \psi_0, \mathcal{Q}_{>0}) \le W_T$, we actually have $\hat{F}(0, \psi_0, \mathcal{Q}_{>0}) = F(0, \psi_0, \mathcal{Q}_{>0})$, by the way terminal states of $\hat{F}$ are handled. In addition, letting $\hat{\pi}_{\hat{S}}$ be the permutation of $\hat{S} = \mathcal{Q}_{>0}$ that attains $\hat{F}(0, \psi_0, \mathcal{Q}_{>0})$, it follows that $\hat{S} \subseteq \mathcal{C}_{[1,0]} \uplus \mathcal{Q}_{>0}$, $\mathcal{Q}_{>0} \subseteq \hat{S}$, and $\Psi(\hat{\pi}_{\hat{S}}) \ge \psi_0$, again by definition.

General case: $m \ge 1$. Let $(m, \psi_m, \mathcal{Q}_{>m})$ be a state for which $F(m, \psi_m, \mathcal{Q}_{>m}) \le W_T$. We first show that $\hat{F}(m, \psi_m, \mathcal{Q}_{>m}) \le F(m, \psi_m, \mathcal{Q}_{>m})$. To this end, recall that the function value $\hat{F}(m, \psi_m, \mathcal{Q}_{>m})$ is determined by minimizing $\hat{F}(m-1, \psi_{m-1}, \mathcal{Q}_{>m-1}) + w(\hat{\mathcal{E}})$ over all conceivable states $(m-1, \psi_{m-1}, \mathcal{Q}_{>m-1})$, where the item set $\hat{\mathcal{E}}$ and its permutation $\hat{\pi}_{\hat{\mathcal{E}}} : \hat{\mathcal{E}} \to [ |\hat{\mathcal{E}}| ]$ are obtained by instantiating Lemma 4.11 with $\Delta = \hat{F}(m-1, \psi_{m-1}, \mathcal{Q}_{>m-1})$ and satisfy $( \hat{\mathcal{E}}, \hat{\pi}_{\hat{\mathcal{E}}} ) \in \mathrm{Extra}_{\epsilon,\Delta}[\substack{(m,\psi_m,\mathcal{Q}_{>m}) \\ (m-1,\psi_{m-1},\mathcal{Q}_{>m-1})}]$. Therefore, specifically for the state $(m-1, \psi^*_{m-1}, \mathcal{Q}^*_{>m-1}) = \mathrm{Best}(m, \psi_m, \mathcal{Q}_{>m})$, we have $\Delta = \hat{F}(m-1, \psi^*_{m-1}, \mathcal{Q}^*_{>m-1}) \le F(m-1, \psi^*_{m-1}, \mathcal{Q}^*_{>m-1})$ by the induction hypothesis.
In turn, our auxiliary procedure computes a corresponding item set and permutation $( \hat{\mathcal{E}}^*, \hat{\pi}^*_{\hat{\mathcal{E}}^*} ) \in \mathrm{Extra}_{\epsilon,\Delta}[\substack{(m,\psi_m,\mathcal{Q}_{>m}) \\ (m-1,\psi^*_{m-1},\mathcal{Q}^*_{>m-1})}]$ with total weight $w( \hat{\mathcal{E}}^* ) \le F(m, \psi_m, \mathcal{Q}_{>m}) - F(m-1, \psi^*_{m-1}, \mathcal{Q}^*_{>m-1})$, as guaranteed by Lemma 4.11. Consequently,
\[ \hat{F}(m, \psi_m, \mathcal{Q}_{>m}) \;\le\; \hat{F}\big( m-1, \psi^*_{m-1}, \mathcal{Q}^*_{>m-1} \big) + w\big( \hat{\mathcal{E}}^* \big) \;\le\; F\big( m-1, \psi^*_{m-1}, \mathcal{Q}^*_{>m-1} \big) + \Big( F(m, \psi_m, \mathcal{Q}_{>m}) - F\big( m-1, \psi^*_{m-1}, \mathcal{Q}^*_{>m-1} \big) \Big) \;=\; F(m, \psi_m, \mathcal{Q}_{>m}) , \]
which is precisely the required upper bound on $\hat{F}(m, \psi_m, \mathcal{Q}_{>m})$.

Next, we show that $\hat{F}(m, \psi_m, \mathcal{Q}_{>m})$ is attained by an item set $\hat{S}_m$ and a permutation $\hat{\pi}_{\hat{S}_m}$ satisfying $\hat{S}_m \subseteq \mathcal{C}_{[1,m]} \uplus \mathcal{Q}_{>m}$, $\mathcal{Q}_{>m} \subseteq \hat{S}_m$, and $\Psi(\hat{\pi}_{\hat{S}_m}) \ge (1-\epsilon) \cdot \psi_m$. For this purpose, let $(m-1, \psi_{m-1}, \mathcal{Q}_{>m-1})$, $\hat{\mathcal{E}}$, and $\hat{\pi}_{\hat{\mathcal{E}}}$ be the conceivable state, item set, and permutation at which $\hat{F}(m, \psi_m, \mathcal{Q}_{>m}) = \hat{F}(m-1, \psi_{m-1}, \mathcal{Q}_{>m-1}) + w(\hat{\mathcal{E}})$ is attained, meaning in particular that $\mathcal{Q}_{>m-1} \setminus \mathcal{C}_m \subseteq \mathcal{Q}_{>m}$ by definition of conceivable states, and that $( \hat{\mathcal{E}}, \hat{\pi}_{\hat{\mathcal{E}}} ) \in \mathrm{Extra}_{\epsilon,\Delta}[\substack{(m,\psi_m,\mathcal{Q}_{>m}) \\ (m-1,\psi_{m-1},\mathcal{Q}_{>m-1})}]$ by the way general states of $\hat{F}$ are handled. We proceed by observing that, by the induction hypothesis, $\hat{F}(m-1, \psi_{m-1}, \mathcal{Q}_{>m-1})$ is attained by an item set $\hat{S}_{m-1}$ and a permutation $\hat{\pi}_{\hat{S}_{m-1}}$ satisfying $\hat{S}_{m-1} \subseteq \mathcal{C}_{[1,m-1]} \uplus \mathcal{Q}_{>m-1}$, $\mathcal{Q}_{>m-1} \subseteq \hat{S}_{m-1}$, and $\Psi(\hat{\pi}_{\hat{S}_{m-1}}) \ge (1-\epsilon) \cdot \psi_{m-1}$. With these ingredients, let us define the item set $\hat{S}_m$ and permutation $\hat{\pi}_{\hat{S}_m}$ as follows:
• The item set $\hat{S}_m$ is given by $\hat{S}_m = \hat{S}_{m-1} \uplus \hat{\mathcal{E}}$. To understand why $\hat{S}_{m-1}$ and $\hat{\mathcal{E}}$ are disjoint, recall that $( \hat{\mathcal{E}}, \hat{\pi}_{\hat{\mathcal{E}}} ) \in \mathrm{Extra}_{\epsilon,\Delta}[\substack{(m,\psi_m,\mathcal{Q}_{>m}) \\ (m-1,\psi_{m-1},\mathcal{Q}_{>m-1})}]$, which implies by constraint 1 that $\hat{\mathcal{E}} \subseteq ( \mathcal{C}_m \uplus \mathcal{Q}_{>m} ) \setminus \mathcal{Q}_{>m-1} \subseteq \mathcal{C}_{[m,M]} \setminus \mathcal{Q}_{>m-1}$; however, $\hat{S}_{m-1} \subseteq \mathcal{C}_{[1,m-1]} \uplus \mathcal{Q}_{>m-1}$ by the induction hypothesis. These observations allow us to concurrently argue that $\hat{S}_m \subseteq \mathcal{C}_{[1,m]} \uplus \mathcal{Q}_{>m}$ as required, since $\hat{\mathcal{E}} \subseteq ( \mathcal{C}_m \uplus \mathcal{Q}_{>m} ) \setminus \mathcal{Q}_{>m-1} \subseteq \mathcal{C}_{[1,m]} \uplus \mathcal{Q}_{>m}$ and since
\[ \hat{S}_{m-1} \;\subseteq\; \mathcal{C}_{[1,m-1]} \uplus \mathcal{Q}_{>m-1} \;\subseteq\; \mathcal{C}_{[1,m]} \uplus ( \mathcal{Q}_{>m-1} \setminus \mathcal{C}_m ) \;\subseteq\; \mathcal{C}_{[1,m]} \uplus \mathcal{Q}_{>m} , \]
where the last inclusion follows by noting that $\mathcal{Q}_{>m-1} \setminus \mathcal{C}_m \subseteq \mathcal{Q}_{>m}$ due to the state $(m-1, \psi_{m-1}, \mathcal{Q}_{>m-1})$ being conceivable. In addition,
\[ \mathcal{Q}_{>m} \;\subseteq\; \mathcal{Q}_{>m-1} \uplus ( \mathcal{Q}_{>m} \setminus \mathcal{Q}_{>m-1} ) \;\subseteq\; \hat{S}_{m-1} \uplus \hat{\mathcal{E}} \;=\; \hat{S}_m , \]
where the second inclusion holds since $\mathcal{Q}_{>m-1} \subseteq \hat{S}_{m-1}$ by the induction hypothesis and since $\mathcal{Q}_{>m} \setminus \mathcal{Q}_{>m-1} \subseteq \hat{\mathcal{E}}$, again by constraint 1.
• To define the permutation $\hat{\pi}_{\hat{S}_m} : \hat{S}_m \to [ |\hat{S}_m| ]$, we simply append $\hat{\pi}_{\hat{\mathcal{E}}}$ to $\hat{\pi}_{\hat{S}_{m-1}}$. As a result, we obtain a profit of
\[ \Psi\big( \hat{\pi}_{\hat{S}_m} \big) \;=\; \Psi\big( \hat{\pi}_{\hat{S}_{m-1}} \big) + \sum_{i \in \hat{\mathcal{E}}} \varphi^{+w(\hat{S}_{m-1})}_{\hat{\pi}_{\hat{\mathcal{E}}}}(i) \;=\; \Psi\big( \hat{\pi}_{\hat{S}_{m-1}} \big) + \sum_{i \in \hat{\mathcal{E}}} \varphi^{+\Delta}_{\hat{\pi}_{\hat{\mathcal{E}}}}(i) \;\ge\; (1-\epsilon) \cdot \psi_{m-1} + (1-\epsilon) \cdot ( \psi_m - \psi_{m-1} ) \;=\; (1-\epsilon) \cdot \psi_m . \]
Here, the second equality holds since $w( \hat{S}_{m-1} ) = \hat{F}(m-1, \psi_{m-1}, \mathcal{Q}_{>m-1}) = \Delta$. To understand the inequality above, note that $\Psi( \hat{\pi}_{\hat{S}_{m-1}} ) \ge (1-\epsilon) \cdot \psi_{m-1}$ by the inductive hypothesis, and in addition, $\sum_{i \in \hat{\mathcal{E}}} \varphi^{+\Delta}_{\hat{\pi}_{\hat{\mathcal{E}}}}(i) \ge (1-\epsilon) \cdot ( \psi_m - \psi_{m-1} )$, since $( \hat{\mathcal{E}}, \hat{\pi}_{\hat{\mathcal{E}}} ) \in \mathrm{Extra}_{\epsilon,\Delta}[\substack{(m,\psi_m,\mathcal{Q}_{>m}) \\ (m-1,\psi_{m-1},\mathcal{Q}_{>m-1})}]$ implies that constraint 2 is $(\epsilon, \Delta)$-satisfied.
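To recap the recursion whose approximation guarantee was just established, the following memoization skeleton is a rough illustration only: make_F_hat, conceivable_states, solve_terminal_state, extra_item_set, and weight are all hypothetical placeholders for the constructions of Section 4 and Lemma 4.11, and the details of how conceivable and terminal states are generated are deliberately omitted.

from functools import lru_cache

# Illustrative skeleton of the approximate recursion hat{F} analyzed in Lemma 4.12:
# hat{F}(m, psi_m, Q_{>m}) = min over conceivable states (m-1, psi_{m-1}, Q_{>m-1}) of
#     hat{F}(m-1, psi_{m-1}, Q_{>m-1}) + w(hat{E}),
# where hat{E} comes from the auxiliary procedure of Lemma 4.11, instantiated with
# Delta = hat{F}(m-1, psi_{m-1}, Q_{>m-1}).

def make_F_hat(conceivable_states, solve_terminal_state, extra_item_set, weight):
    @lru_cache(maxsize=None)
    def F_hat(m, psi_m, Q_gt_m):            # Q_gt_m assumed hashable (e.g., frozenset)
        if m == 0:
            return solve_terminal_state(psi_m, Q_gt_m)   # terminal states: exact value
        best = float("inf")
        for (psi_prev, Q_prev) in conceivable_states(m, psi_m, Q_gt_m):
            delta = F_hat(m - 1, psi_prev, Q_prev)
            E_hat = extra_item_set(m, psi_m, Q_gt_m, psi_prev, Q_prev, delta)
            if E_hat is not None:                        # auxiliary procedure may abort
                best = min(best, delta + weight(E_hat))
        return best
    return F_hat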