Submodular maximization with uncertain knapsack capacity
Yasushi Kawase (Tokyo Institute of Technology, Tokyo, Japan. Email: [email protected])
Hanna Sumita (National Institute of Informatics, Tokyo, Japan. JST, ERATO, Kawarabayashi Large Graph Project. Email: [email protected])
Takuro Fukunaga (RIKEN Center for Advanced Intelligence Project, Tokyo, Japan. Email: [email protected])
Abstract
We consider the maximization problem of monotone submodular functions under an uncertain knapsack constraint. Specifically, the problem is discussed in the situation that the knapsack capacity is not given explicitly and can be accessed only through an oracle that answers whether or not the current solution is feasible when an item is added to the solution. Assuming that cancellation of the last item is allowed when it overflows the knapsack capacity, we discuss the robustness ratios of adaptive policies for this problem, which are the worst-case ratios of the objective values achieved by the output solutions to the optimal objective values. We present a randomized policy of robustness ratio (1 − 1/e)/2, and a deterministic policy of robustness ratio 2(1 − 1/e)/21. We also consider a universal policy that chooses items following a precomputed sequence. We present a randomized universal policy of robustness ratio (1 − 1/√e)/2. When the cancellation is not allowed, no randomized adaptive policy achieves a constant robustness ratio. Because of this hardness, we assume that a probability distribution of the knapsack capacity is given, and consider computing a sequence of items that maximizes the expected objective value. We present a polynomial-time randomized algorithm of approximation ratio (1 − 1/√e)/4 − ε for any small constant ε > 0.

1 Introduction

Submodular maximization is one of the most well-studied combinatorial optimization problems. Since it captures an essential part of decision-making situations, it has a huge number of applications in diverse areas of computer science. Nevertheless, the standard setting of the submodular maximization problem fails to capture several realistic situations. For example, let us consider choosing several items to maximize a reward represented by a submodular function subject to a resource limitation. When the amount of the available resource is exactly known, this problem is formulated as the submodular maximization problem with a knapsack constraint. However, in many practical cases, precise information on the available resource is not given. Thus, algorithms for the standard submodular maximization problem cannot be applied to this situation. Motivated by this fact, we study robust maximization of submodular functions with an uncertain knapsack capacity. Besides the practical applications, it is interesting to study this problem because it shows how much robustness can be achieved for an uncertain knapsack capacity in submodular maximization.

More specifically, we study the submodular maximization problem with an unknown knapsack capacity (SMPUC). In this problem, we are given a set I of items, and a monotone nonnegative submodular function f : 2^I → R₊ such that f(∅) =
0, where each item i ∈ I is associated with a size s(i). The objective is to find a set of items that maximizes the submodular function subject to a knapsack constraint, but we assume that the knapsack capacity is unknown. We have access to the knapsack capacity through an oracle; we add items to the knapsack one by one, and we see whether or not the selected items violate the knapsack constraint only after the addition. If a selected item fits the knapsack, the selection of this item is irrevocable. When the total size of the selected items exceeds the capacity, there are two settings according to whether or not the last selection can be canceled. If the cancellation is allowed, then we remove the last selected item from the knapsack, and continue adding the remaining items to the knapsack. In the other setting, we stop the selection, and the final output is defined as the item set in the knapsack before adding the last item.

For the setting where the cancellation is allowed, we consider an adaptive policy, which is defined as a decision tree to decide which item to pack into the knapsack next. A randomized policy is a probability distribution over adaptive policies. Performance of an adaptive policy is evaluated by the robustness ratio defined as follows. For any number C ∈ R₊, let OPT_C denote the optimal item set when the capacity is C, and let ALG_C denote an output of the policy. Note that if the policy is a randomized one, then ALG_C is a random variable. We call the adaptive policy α-robust, for some α ≤ 1, if for any C ∈ R₊, the expected objective value of the policy's output is within a ratio α of the optimal value, i.e., E[f(ALG_C)]/f(OPT_C) ≥ α. We also call the ratio α the robustness ratio of the policy.

One main purpose of this paper is to present algorithms that produce adaptive policies of constant robustness ratios for SMPUC. Moreover, we consider a special type of adaptive policy called a universal policy. A universal policy selects items following a precomputed order of items regardless of the observations made while packing. Thus, a universal policy is identified with a sequence of the given items. This is in contrast to general adaptive policies, where the next item to try can vary with the observations made up to that point. We present an algorithm that produces a randomized universal policy that achieves a constant robustness ratio.

If the cancellation is not allowed, then there is no difference between adaptive and universal policies because the selection terminates once a selected item does not fit the knapsack. In this case, we observe that no randomized adaptive policy achieves a constant robustness ratio. Due to this hardness, we consider a stochastic knapsack capacity when the cancellation is not allowed. In this situation, we assume that the knapsack capacity is determined according to some probability distribution and the information of the distribution is available. Based on this assumption, we compute a sequence of items as a solution. When the knapsack capacity is realized, the items in the prefix of the sequence are selected so that their total size does not exceed the realized capacity. The objective of the problem is to maximize the expected value of the submodular function f for the selected items. We address this problem as the submodular maximization problem with a stochastic knapsack capacity (SMPSC).
We say that the approximation ratio of a sequence is α (≤ 1) if its expected objective value is at least α times the maximum expected value of f for any instance. The sequence computed by an α-robust policy for SMPUC achieves an α-approximation ratio for SMPSC. However, the opposite does not hold, and an algorithm of a constant approximation ratio may exist for SMPSC even though no randomized adaptive policy achieves a constant robustness ratio for SMPUC. Indeed, we present such an algorithm.

There are a huge number of studies on submodular maximization problems (e.g., [18]), but we are aware of no previous work on SMPUC or SMPSC. Regarding studies on the stochastic setting of the problem, several papers proposed concepts of submodularity for random set functions and discussed adaptive policies to maximize those functions [2, 10]. There are also studies on submodular maximization over an item set in which each item is activated stochastically [1, 9, 12]. However, as far as we know, there is no study on the problem with stochastic constraints.

When the objective function is modular (i.e., the function returns the sum of the values associated with the selected items), the submodular maximization problem with a knapsack constraint is equivalent to the classic knapsack problem. For the knapsack problem, there are numerous studies on stochastic sizes and rewards of items [5, 11, 15]. This problem is called the stochastic knapsack problem. Note that this is different from the knapsack problem with a stochastic capacity (KPSC), and there is no direct relationship between them. However, we observe that most of the algorithms for the stochastic knapsack problem can be applied to KPSC. Indeed, one of our algorithms for SMPSC is based on the idea of Gupta et al. [11] for the stochastic knapsack problem.

The covering version of KPSC is studied in the context of single machine scheduling with nonuniform processing speed.
This problem is known to be strongly NP-hard [13], which implies that pseudo-polynomial time algorithms are unlikely to exist. This is in contrast to the fact that the classic knapsack problem and its covering version admit pseudo-polynomial time algorithms. Megow and Verschae [17] gave a PTAS for the covering version of KPSC. To the best of our knowledge, KPSC itself has not been studied well. The only previous study we are aware of is the thesis of Dabney [4], wherein a PTAS is presented for the problem. Since the knapsack problem and its covering version are equivalent in the existence of exact algorithms, the strong NP-hardness of the covering version implies the same hardness for KPSC.

Table 1: Summary of main results in this paper.
  Unknown capacity, with cancellation: randomized adaptive (1 − 1/e)/2-robust; deterministic adaptive 2(1 − 1/e)/21-robust; randomized universal (1 − 1/√e)/2-robust.
  Unknown capacity, without cancellation: no constant robustness ratio.
  Stochastic capacity, without cancellation: randomized (1/2 − o(1))-approx. (pseudo-polynomial time); randomized ((1 − 1/√e)/4 − ε)-approx. (polynomial time).

Regarding the knapsack problem with an unknown capacity (KPUC), Megow and Mestre [16] mentioned that no deterministic policy achieves a constant robustness ratio when cancellation is not allowed. They presented an algorithm that constructs for each instance a policy whose robustness ratio is arbitrarily close to that of an optimal policy that achieves the largest robustness ratio for the instance. When cancellation is allowed, Disser et al. [6] provided a deterministic 1/2-robust universal policy, and they also proved that no deterministic adaptive policy achieves a robustness ratio better than 1/2, which means that the robustness ratio of their deterministic universal policy is best possible even for any deterministic adaptive policy.
Contributions in this paper are summarized in Table 1. For the case where cancellation is allowed, we present three polynomial-time algorithms for SMPUC. These algorithms produce

• a randomized adaptive policy of robustness ratio (1 − 1/e)/2 (Section 3.1);
• a deterministic adaptive policy of robustness ratio 2(1 − 1/e)/21 (Section 3.2);
• a randomized universal policy of robustness ratio (1 − 1/√e)/2 (Section 3.3).

A key ingredient of the deterministic policy is as follows. For KPUC, Disser et al. [6] call an item a swap item if it corresponds to a single-item solution for some knapsack capacity. A key idea in their policy is to pack swap items earlier than the others. However, their technique fully relies on the property that the objective function is modular, and their choice of swap items is not suitable for SMPUC. In the present paper, we introduce a new notion of single-valuable items. This enables us to design a deterministic 2(1 − 1/e)/21-robust adaptive policy. In addition, we show that no deterministic policy and no randomized policy can achieve robustness ratios better than certain constants (Section 4). For SMPSC, which corresponds to the setting without cancellation, we present

• a pseudo-polynomial-time randomized algorithm of approximation ratio 1/2 − o(1) (Section 5.1);
• a polynomial-time randomized algorithm of approximation ratio (1 − 1/√e)/4 − ε for any small constant ε > 0 (Section 5.2).

For the latter, we first present a pseudo-polynomial-time randomized algorithm of approximation ratio (1 − 1/√e)/4 − o(1) for SMPSC. We then transform it into a polynomial-time algorithm. This transformation requires a careful sketching of the knapsack capacity, which was not necessary for the stochastic knapsack problem.

The rest of this paper is organized as follows. Section 2 gives notations and preliminary facts used in this paper. Section 3 presents the adaptive policies for SMPUC with cancellation. Section 4 provides the upper bounds on the robustness ratios. Section 5 presents the approximation algorithms for SMPSC without cancellation. Section 6 concludes the paper.
2 Preliminaries

In this section, we define the terminology used in this paper and introduce existing results that we will use.
Maximization of monotone submodular functions:
The inputs of the problem are a set I of n items and a nonnegative set function f : 2^I → R₊. In this paper, we assume that (i) f satisfies f(∅) = 0, (ii) f is submodular (i.e., f(X) + f(Y) ≥ f(X ∪ Y) + f(X ∩ Y) for any X, Y ⊆ I), and (iii) f is monotone (i.e., f(X) ≤ f(Y) for any X, Y ⊆ I with X ⊆ Y). The function f is given as an oracle that returns the value f(X) for any query X ⊆ I. Let 𝓘 ⊆ 2^I be any family such that X ⊆ Y ∈ 𝓘 implies X ∈ 𝓘. The 𝓘-constrained submodular maximization problem seeks X ∈ 𝓘 that maximizes f(X).

We focus on the case where 𝓘 corresponds to a knapsack constraint. Namely, each item i ∈ I is associated with a size s(i), and 𝓘 is defined as {X ⊆ I : ∑_{i∈X} s(i) ≤ C} for some knapsack capacity C > 0. We assume that the item sizes s(i) (i ∈ I) and the knapsack capacity C are positive integers. We denote ∑_{i∈X} s(i) by s(X) for any X ⊆ I.

Problem SMPUC:
In SMPUC, the knapsack capacity C is unknown. We can see whether an item set fits the knapsack only when an item is added to the item set.

A solution for SMPUC is an adaptive policy P, which is represented as a binary decision tree that contains every item at most once along each path from the root to a leaf. Each node of the decision tree is an item to try packing into the knapsack. A randomized policy is a probability distribution over binary decision trees; one of the decision trees is selected according to the probability distribution. For a fixed capacity C, the output of a policy P is an item set denoted by P(C) ⊆ I obtained as follows. We start with P(C) = ∅ and check whether the item r at the root of P fits the knapsack, i.e., whether s(r) + s(P(C)) ≤ C. If the item fits, then we add r to P(C) and continue packing recursively with the left subtree of r. Otherwise, we have two options: when cancellation is allowed, we discard r and continue packing recursively with the right subtree of r; when cancellation is not allowed, we discard r and output P(C) to terminate the process.

When a policy does not depend on the observations made while packing, we call such a policy universal. Since every path from the root to a leaf in a universal policy is identical, we can identify a universal policy with a sequence Π = (Π_1, . . . , Π_n) of the items in I. For a fixed capacity C, the output of a universal policy, denoted by Π(C), is constructed as follows. We start with Π(C) = ∅, and add items to Π(C) in n iterations. In the i-th iteration, we check whether s(Π(C)) + s(Π_i) ≤ C holds or not. If true, then Π_i is added to Π(C). Otherwise, Π_i is discarded, and we proceed to the next iteration when cancellation is allowed, and we terminate the process when cancellation is not allowed.

Problem SMPSC:
In SMPSC, the knapsack capacity C is given according to some probability distribution. Let T = ∑_{i∈I} s(i). For each t ∈ [T] := {1, 2, . . . , T}, we denote by p(t) the probability that the knapsack capacity is t. We assume that the probability is given to an algorithm through an oracle that returns the value of ∑_{t′=t}^{T} p(t′) for any query t ∈ [T]. Hence, the input size of a problem instance is Θ(n log T), and an algorithm runs in pseudo-polynomial time if its running time depends on T linearly. A solution for SMPSC is a universal policy, i.e., a sequence Π = (Π_1, . . . , Π_n) of the items in I. When a capacity C is decided, the output Π(C) of Π is constructed in the same way as for universal policies for SMPUC. The objective of SMPSC is to find a sequence Π that maximizes E_C[f(Π(C))].

Multilinear extension, continuous greedy, and contention resolution scheme:
From any vector x ∈ [0, 1]^I, we define a random subset R_x of I so that each i ∈ I is included in R_x with probability x_i, where the inclusion of i is independent from the inclusion of the other items. For a submodular function f : 2^I → R₊, its multilinear extension F : [0, 1]^I → R₊ is defined by F(x) = E[f(R_x)] = ∑_{X⊆I} f(X) ∏_{i∈X} x_i ∏_{i′∈I∖X} (1 − x_{i′}) for all x ∈ [0, 1]^I. This function F satisfies the smooth monotone submodularity, that is, ∂F(x)/∂x_i ≥ 0 for any i ∈ I and ∂²F(x)/(∂x_i ∂x_j) ≤ 0 for any i, j ∈ I. Although it is hard to compute the value of F(x) exactly, it usually can be approximated with arbitrary precision by a Monte-Carlo simulation. In this paper, to make the discussion simple, we assume that F can be computed exactly.

A popular approach for solving the 𝓘-constrained submodular maximization problem is to use a continuous relaxation of the problem. Let P ⊆ [0, 1]^I be a polytope in which each integer vector in P is the incidence vector of a member of 𝓘. Then, max_{x∈P} F(x) ≥ max_{X∈𝓘} f(X) holds. In this approach, it is usually assumed that P is downward-closed (i.e., if x, y ∈ [0, 1]^I satisfy y ≤ x ∈ P, then y ∈ P), and solvable (i.e., the maximization problem max_{x∈P} ∑_{i∈I} w_i x_i can be solved in polynomial time for any w ∈ R₊^I).

Calinescu et al. [3] gave an algorithm called continuous greedy for the continuous maximization problem max_{x∈P} F(x) over a solvable downward-closed polytope P. They proved that the continuous greedy outputs a vector x ∈ P such that F(x) ≥ (1 − 1/e − o(1)) max_{x′∈P} F(x′). Feldman [7] extended its analysis by observing that the continuous greedy algorithm with stopping time b ∈ (0, 1] outputs a vector x ∈ [0, 1]^I such that x/b ∈ P and F(x) ≥ (1 − e^{−b} − o(1)) max_{X∈𝓘} f(X) (the performance guarantee depending on the stopping time is originally given for the measured continuous greedy algorithm proposed by [8]).
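As noted above, F(x) = E[f(R_x)] is typically estimated by Monte-Carlo simulation. A minimal sketch (the weighted-coverage function below is a hypothetical example, not from the paper; `F_exact` enumerates all 2^n subsets and is only for cross-checking on small instances):

```python
import itertools
import random

# Hypothetical monotone submodular function: weighted coverage.
covers = {"a": {1}, "b": {1, 2}, "c": {3}}   # item -> covered elements
weight = {1: 10.0, 2: 6.0, 3: 7.0}           # element -> weight

def f(X):
    """f(X) = total weight of the elements covered by the items in X."""
    covered = set()
    for i in X:
        covered |= covers[i]
    return sum(weight[e] for e in covered)

def F_exact(x):
    """Multilinear extension by full enumeration: sum_X f(X) prod x_i prod (1-x_i)."""
    items = list(x)
    total = 0.0
    for r in range(len(items) + 1):
        for X in itertools.combinations(items, r):
            p = 1.0
            for i in items:
                p *= x[i] if i in X else 1.0 - x[i]
            total += p * f(X)
    return total

def F_estimate(x, samples=20000, seed=0):
    """Monte-Carlo estimate of F(x): sample R_x by independent coin flips."""
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(samples):
        acc += f([i for i, xi in x.items() if rng.random() < xi])
    return acc / samples

x = {"a": 0.5, "b": 0.3, "c": 0.8}
# By linearity over elements: 10*(1-0.5*0.7) + 6*0.3 + 7*0.8 = 13.9 (up to float error).
print(F_exact(x), F_estimate(x))
```

For coverage functions F(x) can also be computed element-wise in closed form, which is why the exact value 13.9 is easy to verify here; for a general submodular oracle only the sampling estimator is available.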
It is easy to see that his analysis can be modified to prove a slightly stronger result F(x) ≥ (1 − e^{−b} − o(1)) max_{x′∈P} F(x′). In addition, this analysis requires only the smooth monotone submodularity as a property of F.

A fractional vector x ∈ P can be rounded into an integer vector by a contention resolution scheme. Let b, c ∈ [0, 1]. For a vector x, we denote supp(x) = {i ∈ I : x_i > 0}. We consider an algorithm that receives x ∈ {z : z/b ∈ P} and A ⊆ I as inputs and returns a random subset π_x(A) ⊆ A ∩ supp(x). Such an algorithm is called a (b, c)-balanced contention resolution scheme if π_x(A) ∈ 𝓘 with probability 1 for all x and A, and Pr[i ∈ π_x(R_x) | i ∈ R_x] ≥ c holds for all x and i ∈ supp(x) (recall that R_x is the random subset of I determined from x). It is also called monotone if Pr[i ∈ π_x(A)] ≥ Pr[i ∈ π_x(A′)] for any i ∈ A ⊆ A′ ⊆ I. If a monotone (b, c)-balanced contention resolution scheme is available, then we can achieve the approximation ratio claimed below by applying it to a fractional vector computed by the measured continuous greedy algorithm with stopping time b. This fact is summarized in the following theorem.

Theorem 1 ([8]). If there exists a monotone (b, c)-balanced contention resolution scheme for 𝓘, then the 𝓘-constrained submodular maximization problem admits an approximation algorithm of ratio (1 − e^{−b})c − o(1) for any nonnegative monotone submodular function.

3 SMPUC with cancellation

3.1 A (1 − 1/e)/2-robust policy

In this subsection, we present a randomized adaptive policy for SMPUC in the situation that the cancellation is allowed. The idea of our algorithm is based on a simple greedy algorithm [14] for the submodular maximization problem with a knapsack constraint. The greedy algorithm generates two candidate item sets.
One set is obtained greedily by repeatedly inserting an item maximizing the increase in the objective function value per unit of size. The other is obtained similarly by packing an item maximizing the increase in the objective function value. Then, the algorithm returns the set with the larger value, which leads to a (1 − 1/e)/2-approximation. Our randomized policy executes one of two policies P1 and P2 that are analogous to the above two greedy methods. One policy P1 corresponds to the greedy algorithm based on the increase in the objective function value per unit of size. We formally present this policy as Algorithm 1. The item i_j corresponds to a node of depth j − 1 in the decision tree P1. We remark that Algorithm 1 chooses i_j independently of the knapsack capacity, but the choice depends on the observations of which items fit the knapsack and which items did not so far. For generality of the algorithm, we assume that the algorithm receives an initial state U of the knapsack, which is defined as a subset of I (this will be used in Section 3.2).

Algorithm 1: Greedy algorithm P1 for (I, U)
1: X ← U, R ← I ∖ U;
2: foreach j = 1, . . . , |I ∖ U| do
3:   let i_j ∈ arg max {(f(X ∪ {i}) − f(X))/s(i) : i ∈ R};
4:   if i_j fits the knapsack (i.e., s(X) + s(i_j) ≤ C) then  // left subtree
5:     X ← X ∪ {i_j}
6:   else  // right subtree
7:     discard i_j
8:   R ← R ∖ {i_j};
9: return X;

The policy P2 tries to pack items greedily based on the increase in the objective function value. Our algorithm is summarized in Algorithm 2. We remark that Algorithm 2 chooses the item i_j for each iteration j in polynomial time with respect to the cardinality of I.

Algorithm 2: Randomized (1 − 1/e)/2-robust adaptive policy
1: flip a coin;
2: if head then execute Algorithm 1 for (I, ∅);  // policy P1
3: else  // policy P2
4:   X ← ∅, R ← I;
5:   foreach j = 1, . . . , |I| do
6:     let i_j ∈ arg max {f(X ∪ {i}) − f(X) : i ∈ R};
7:     if i_j fits the knapsack (i.e., s(X) + s(i_j) ≤ C) then  // left subtree
8:       X ← X ∪ {i_j}
9:     else  // right subtree
10:      discard i_j
11:    R ← R ∖ {i_j};
12:  return X;

We analyze the robustness ratio of Algorithm 2. In the execution of Algorithm 1 for (I, U) under some capacity C, we call the order (i_1, . . . , i_{|I∖U|}) of the items in I ∖ U the greedy order for (I, U) with capacity C, where i_j is the j-th selected item at line 3. A key concept in the analysis of Algorithm 2 is to focus on the first item in the greedy order that is a member of OPT_C but is spilled from P1(C). The following lemma is useful in the analysis of Algorithm 2 and also of the algorithms given in subsequent sections.

Lemma 1.
Let C, C′ be any positive numbers, and let q be the smallest index such that i_q ∈ OPT_{C′} and i_q ∉ P1(C) (let q = ∞ if there is no such index). When Algorithm 1 is executed for (I, U) with capacity C, it holds for any index j that

  f(((P1(C) ∪ OPT_{C′}) ∩ {i_1, . . . , i_j}) ∪ U) ≥ (1 − exp(−s((P1(C) ∪ OPT_{C′}) ∩ {i_1, . . . , i_j})/C′)) · f(OPT_{C′}).

Moreover, (P1(C) ∪ OPT_{C′}) ∩ {i_1, . . . , i_j} = P1(C) ∩ {i_1, . . . , i_j} holds for any j < q.

To prove Lemma 1, we show the following two lemmas.
Lemma 2.
For any X ⊆ Y ⊆ I, we have f(Y) ≤ f(X) + ∑_{i∈Y∖X} (f(X ∪ {i}) − f(X)).

Proof.
Suppose that Y ∖ X = {a_1, . . . , a_l}, where l = |Y ∖ X|. Let X_j = X ∪ {a_1, . . . , a_j} for each j = 1, . . . , l. Note that X_0 = X and X_l = Y. Since f is submodular, we have f(X_j) − f(X_{j−1}) ≤ f(X ∪ {a_j}) − f(X). By summing both sides over j, we obtain f(Y) − f(X) = ∑_{j=1}^{l} (f(X_j) − f(X_{j−1})) ≤ ∑_{j=1}^{l} (f(X ∪ {a_j}) − f(X)), which proves the lemma. □

Lemma 3.
Let C be any positive number. For any X ⊆ Y ⊆ I and any i ∈ I ∖ Y such that OPT_C ∖ X ⊆ I ∖ Y and

  (f(X ∪ {i}) − f(X))/s(i) = max {(f(X ∪ {v}) − f(X))/s(v) : v ∈ I ∖ Y},

it holds that f(X ∪ {i}) − f(X) ≥ (s(i)/C) · (f(OPT_C) − f(X)).

Proof.
By applying Lemma 2 with Y = OPT_C ∪ X (⊇ X), we have f(OPT_C ∪ X) ≤ f(X) + ∑_{v∈OPT_C∖X} (f(X ∪ {v}) − f(X)). In addition, f(OPT_C) ≤ f(OPT_C ∪ X) holds by monotonicity. Thus, we have

  f(OPT_C) − f(X) ≤ ∑_{v∈OPT_C∖X} (f(X ∪ {v}) − f(X))
                 = ∑_{v∈OPT_C∖X} s(v) · (f(X ∪ {v}) − f(X))/s(v)
                 ≤ (∑_{v∈OPT_C∖X} s(v)) · (f(X ∪ {i}) − f(X))/s(i)
                 ≤ C · (f(X ∪ {i}) − f(X))/s(i). □

Proof of Lemma 1.
We denote Y_j = U ∪ {i_1, . . . , i_j} and Z_j = (P1(C) ∪ OPT_{C′}) ∩ {i_1, . . . , i_j} for each j. Let Y_0 = U and Z_0 = ∅. We first prove by induction on j that f(Z_j ∪ U) ≥ (1 − exp(−s(Z_j)/C′)) · f(OPT_{C′}) for any j. For j = 0, the claim holds because 1 − exp(−s(Z_0)/C′) = 0 and f(Z_0 ∪ U) ≥ f(∅) = 0. Suppose that f(Z_{j−1} ∪ U) ≥ (1 − exp(−s(Z_{j−1})/C′)) · f(OPT_{C′}) holds for some j ≥ 1. If i_j ∉ P1(C) ∪ OPT_{C′}, then we have Z_j = Z_{j−1}, and hence it is easy to see that f(Z_j ∪ U) ≥ (1 − exp(−s(Z_j)/C′)) · f(OPT_{C′}). In the following, we may assume that i_j ∈ P1(C) ∪ OPT_{C′}. Note that Z_{j−1} ∪ {i_j} = Z_j. We observe that by the choice of i_j in Algorithm 1, it holds that

  (f((Z_{j−1} ∪ U) ∪ {i_j}) − f(Z_{j−1} ∪ U))/s(i_j) = max {(f((Z_{j−1} ∪ U) ∪ {v}) − f(Z_{j−1} ∪ U))/s(v) : v ∈ I ∖ Y_{j−1}}.

Because OPT_{C′} ∖ (Z_{j−1} ∪ U) ⊆ I ∖ Y_{j−1}, we can apply Lemma 3 with X = Z_{j−1} ∪ U, Y = Y_{j−1}, and i = i_j to derive

  f(Z_j ∪ U) − f(Z_{j−1} ∪ U) ≥ (s(i_j)/C′) · (f(OPT_{C′}) − f(Z_{j−1} ∪ U)).

Therefore, by using this inequality, we see that

  f(Z_j ∪ U) ≥ f(Z_{j−1} ∪ U) + (s(i_j)/C′)(f(OPT_{C′}) − f(Z_{j−1} ∪ U))
            = (1 − s(i_j)/C′) f(Z_{j−1} ∪ U) + (s(i_j)/C′) f(OPT_{C′})
            ≥ (1 − s(i_j)/C′)(1 − exp(−s(Z_{j−1})/C′)) · f(OPT_{C′}) + (s(i_j)/C′) f(OPT_{C′})
            = (1 − (1 − s(i_j)/C′) · exp(−s(Z_{j−1})/C′)) · f(OPT_{C′})
            ≥ (1 − exp(−s(i_j)/C′) · exp(−s(Z_{j−1})/C′)) · f(OPT_{C′})
            = (1 − exp(−s(Z_j)/C′)) · f(OPT_{C′}),

where the last inequality follows from 1 − x ≤ exp(−x) for any x.

It remains to show that Z_j = P1(C) ∩ {i_1, . . . , i_j} for any j < q. The inclusion ⊇ is clear. For any j < q, we have OPT_{C′} ∩ {i_1, . . . , i_j} ⊆ P1(C) ∩ {i_1, . . . , i_j} by the choice of q. Thus, Z_j = (OPT_{C′} ∩ {i_1, . . . , i_j}) ∪ (P1(C) ∩ {i_1, . . . , i_j}) ⊆ P1(C) ∩ {i_1, . . . , i_j}. This completes the proof. □

We are ready to analyze the robustness ratio of Algorithm 2. For any C ∈ R₊, we denote I_C = {i : s(i) ≤ C} and denote by i_C an item with the largest value in this set, i.e., i_C ∈ arg max {f({i}) : i ∈ I_C}.

Theorem 2.
Algorithm 2 is a randomized (1 − 1/e)/2 (> 0.31)-robust adaptive policy.

Proof. Let P1 (respectively, P2) be the adaptive policy in Algorithm 2 when the coin comes up head (respectively, tail). Suppose that the given capacity is C. The expected value of the output of Algorithm 2 is ALG_C = (f(P1(C)) + f(P2(C)))/2. If OPT_C ⊆ P1(C), then we get ALG_C ≥ f(P1(C))/2 ≥ f(OPT_C)/2. Assume that OPT_C ⊄ P1(C). Let (i_1, i_2, . . . , i_n) be the greedy order for (I, ∅) with capacity C. Let q be the smallest index such that i_q ∈ OPT_C and i_q ∉ P1(C). We have f(P2(C)) ≥ f({i_C}) ≥ f({i_q}) because i_C ∈ P2(C) and i_q ∈ I_C. By the monotonicity and submodularity of f, we have

  2 ALG_C ≥ f(P1(C) ∩ {i_1, . . . , i_{q−1}}) + f({i_q}) ≥ f((P1(C) ∩ {i_1, . . . , i_{q−1}}) ∪ {i_q}).

Note that by the choice of q, we have (P1(C) ∩ {i_1, . . . , i_{q−1}}) ∪ {i_q} = (P1(C) ∪ OPT_C) ∩ {i_1, . . . , i_q} and s((P1(C) ∩ {i_1, . . . , i_{q−1}}) ∪ {i_q}) > C. Thus, Lemma 1 implies that f((P1(C) ∪ OPT_C) ∩ {i_1, . . . , i_q}) ≥ (1 − 1/e) f(OPT_C), and hence we have ALG_C ≥ (1 − 1/e) f(OPT_C)/2. □

3.2 A 2(1 − 1/e)/21-robust policy

In this subsection, we present a deterministic adaptive policy for SMPUC by modifying Algorithm 2. To this end, let us review the result of Disser et al. [6] for KPUC, which is identical to SMPUC with modular objective functions. They obtained a deterministic 1/2-robust universal policy based on the greedy order for (I, ∅) with the given capacity. Their policy first tries to insert items with large values, which are called swap items. For a greedy order (i_1, . . . , i_n), an item i_j is called a swap item if f({i_j}) ≥ f(P1(C)) for some capacity C such that i_j is the first item that overflows the knapsack when the items are packed in the greedy order. The key property is that the greedy order does not depend on the capacity C when f is modular. This enables them to determine swap items from the unique greedy order, and to obtain a deterministic universal policy.

On the other hand, it is hard to apply their idea to our purpose. The difficulty is that the greedy order varies according to the capacity when f is submodular. Thus, the notion of swap items is not suitable in SMPUC for choosing the items that should be tried first.
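The capacity-dependence of the greedy order can be reproduced on a toy example. A minimal sketch (the weighted-coverage instance and all names below are hypothetical, chosen only to exhibit the effect; `greedy_order` mimics the density greedy of Algorithm 1 with cancellation):

```python
# Hypothetical instance: item c overlaps item a on element 1, so marginal
# values, and hence the greedy order, depend on which items were packed.
covers = {"a": {1}, "b": {2}, "c": {1, 3}}
weight = {1: 12.0, 2: 5.0, 3: 4.0}
size = {"a": 6, "b": 5, "c": 10}

def f(X):
    """Weighted coverage: monotone and submodular."""
    covered = set()
    for i in X:
        covered |= covers[i]
    return sum(weight[e] for e in covered)

def greedy_order(C):
    """Order in which the density greedy (with cancellation) tries the items."""
    X, R, order = set(), set(size), []
    while R:
        i = max(R, key=lambda v: (f(X | {v}) - f(X)) / size[v])
        order.append(i)
        if sum(size[j] for j in X) + size[i] <= C:  # item fits: pack it
            X = X | {i}
        R = R - {i}                                  # fitting or not, i is never retried
    return order

print(greedy_order(20))  # a is packed, so c's marginal density drops below b's
print(greedy_order(5))   # a is canceled, so c keeps its initial density and is tried next
```

With capacity 20 the tried order is a, b, c, while with capacity 5 it is a, c, b: once a is packed, the marginal value of c shrinks from 16 to 4. For a modular f the tried order would be the same under every capacity, which is exactly the property the swap-item technique relies on.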
In this paper, we introduce single-valuable items, which are items i satisfying f({i}) ≥ 2 · f(OPT_{s(i)/2}) (= 2 · max{f(X) : s(X) ≤ s(i)/2}). In the design of our algorithm, we use a polynomial-time γ-approximation algorithm for computing f(OPT_{s(i)/2}); for example, γ = 1 − 1/e [18]. Our algorithm calculates the set S of the single-valuable items in a sense of γ-approximation. To be precise, it holds that f({i}) ≥ 2γ · f(OPT_{s(i)/2}) for any item i ∈ S and f({i}) ≤ 2 · f(OPT_{s(i)/2}) for any item i ∉ S.

Our algorithm first tries to insert items in S until one of these items fits the knapsack (or all the items have been canceled), and then it executes Algorithm 1 with the remaining items. We remark that, unlike the algorithm for KPUC [6], our algorithm executes Algorithm 1 once a single-valuable item fits the knapsack. We summarize our algorithm in Algorithm 3.

Note that our algorithm constructs S and decides which item to try in polynomial time. In the rest of this subsection, we let R, U, and S denote the item sets at the beginning of line 10. Then U is empty if S ∩ I_C = ∅, and U consists of exactly one item i∗ ∈ arg max {f({i}) : i ∈ I_C ∩ S} otherwise. The following theorem is the main result of this section.

Theorem 3.
Algorithm 3 using a γ-approximation algorithm in line 3 is a min{2γ/21, (1 − e^{−1/3})/3}-robust adaptive policy. In particular, it is 2(1 − 1/e)/21 > 0.06-robust when γ = 1 − 1/e, and it is (1 − e^{−1/3})/3 > 0.094-robust when γ = 1.

Algorithm 3: Deterministic 2γ/21-robust adaptive policy
1: U ← ∅, R ← I, S ← ∅;
2: foreach i ∈ I do
3:   let L be a γ-approximate solution to max{f(X) : s(X) ≤ s(i)/2, X ⊆ I};
4:   if f({i}) ≥ 2 f(L) then S ← S ∪ {i};
5: while U = ∅ and S ∩ R ≠ ∅ do
6:   let i ∈ arg max {f({i′}) : i′ ∈ S ∩ R};
7:   if i fits the knapsack (i.e., s(i) ≤ C) then U ← {i};  // left subtree
8:   else discard i;  // right subtree
9:   R ← R ∖ {i};
10: execute Algorithm 1 for (R ∪ U, U);

We describe the proof idea. We discuss the behavior of Algorithm 3 when the given capacity is C. Since the output of Algorithm 3 is P1(C) for (R ∪ U, U), one can think of a proof similar to the one of Theorem 2. However, we may not be able to use the value f({i_q}) in the evaluation of f(P(C)) here because P(C) may not contain any items that bound f({i_q}). We show the theorem using a different approach. A basic idea is to divide OPT_C into several subsets A_1, . . . , A_k and derive a bound f(OPT_C) ≤ f(A_1) + · · · + f(A_k) by the submodularity of f. A key idea is to evaluate each f(A_i) using properties of single-valuable items, which are shown in the following two lemmas.

Lemma 4.
It holds that f({i}) ≤ max{f(U), 2 f(OPT_{s(i)/2})} for any item i ∈ I_C.

Proof. The lemma follows because f({i}) ≤ f(U) if i ∈ S and f({i}) ≤ 2 f(OPT_{s(i)/2}) if i ∉ S. □

Lemma 5.
Let s∗ = s(U). If s∗ ≤ C/2, then, for any number x ∈ [s∗, C/2], it holds that f(OPT_{2x}) ≤ 3 f(OPT_x).

Proof. First, suppose that OPT_{2x} contains some item i with s(i) > x and i ∈ S. Then we have f(OPT_{2x}) ≤ f({i}) + f(OPT_{2x} ∖ {i}) by the submodularity of f. Since the existence of item i implies I_C ∩ S ≠ ∅, U consists of exactly one item from S. We denote this item by i∗. Any item i′ ∈ S with f({i′}) > f({i∗}) does not fit the knapsack with capacity C (≥ 2x). This implies that f({i}) ≤ f({i∗}). Moreover, we have f({i∗}) ≤ f(OPT_x) because s(i∗) = s∗ ≤ x. Hence, f({i}) ≤ f(OPT_x) follows. On the other hand, f(OPT_{2x} ∖ {i}) ≤ f(OPT_{2x−s(i)}) ≤ f(OPT_x) holds, where the last inequality follows from s(i) > x. Therefore, we see that f(OPT_{2x}) ≤ f({i}) + f(OPT_{2x} ∖ {i}) ≤ 2 f(OPT_x).

Second, suppose that OPT_{2x} contains some item i with s(i) > x and i ∉ S. Recall that i ∉ S implies f({i}) ≤ 2 f(OPT_{s(i)/2}). Since s(i)/2 ≤ x < s(i), we see that f(OPT_{2x}) ≤ f({i}) + f(OPT_{2x} ∖ {i}) ≤ 2 f(OPT_{s(i)/2}) + f(OPT_{2x−s(i)}) ≤ 3 f(OPT_x).

Finally, assume that all items in OPT_{2x} have size at most x. We can divide OPT_{2x} into three sets A_1, A_2, A_3 with s(A_j) ≤ x (j = 1, 2, 3). Therefore, we have f(OPT_{2x}) ≤ f(A_1) + f(A_2) + f(A_3) ≤ 3 f(OPT_x).

The lemma follows from the arguments on the three cases. □
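The partition used in the final case (items of size at most x with total size at most 2x always split into three parts of size at most x) can be sanity-checked numerically: greedy first-fit packing into bins of capacity x never opens a fourth bin. A quick randomized check (an illustrative sketch, not part of the paper's algorithms):

```python
import random

def first_fit(sizes, cap):
    """Pack sizes into bins of capacity cap by first-fit; return the bin loads."""
    bins = []
    for s in sizes:
        for k in range(len(bins)):
            if bins[k] + s <= cap:
                bins[k] += s
                break
        else:
            bins.append(s)  # no existing bin fits: open a new one
    return bins

rng = random.Random(1)
for _ in range(1000):
    x = 1.0
    sizes = []
    while sum(sizes) < 2 * x:          # draw items of size at most x
        sizes.append(rng.uniform(0, x))
    if sum(sizes) > 2 * x:
        sizes.pop()                     # keep the total size at most 2x
    bins = first_fit(sizes, x)
    assert len(bins) <= 3 and all(b <= x for b in bins)
print("ok: three bins always suffice")
```

The guarantee is not only empirical: if first-fit opened a fourth bin, the pairwise load bounds forced by the rule (each later bin's opener did not fit any earlier bin) would make the total size exceed 2x, a contradiction.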
Proof of Theorem 3.
Let P be the deterministic policy described as Algorithm 3. Suppose that the given capacity is C. We remark that P(C) = P(C) for (R ∪ U, U). We may assume that OPT_C ⊄ P(C), since otherwise f(P(C)) = f(OPT_C). Let s* = s(U). We branch the analysis into two cases: (a) s* < C/3 and (b) s* ≥ C/3.

Case (a):
We claim that f(P(C)) ≥ (1 − 1/√e)·f(OPT_C)/3. Since s* < C/3 < C/2, Lemma 5 indicates that f(OPT_{C/2}) ≥ f(OPT_C)/3. We evaluate f(OPT_{C/2}) by using Lemma 1 with C′ = C/2. We may assume that OPT_{C/2} ⊄ P(C); otherwise f(P(C)) ≥ f(OPT_{C/2}) holds, which implies that f(P(C)) ≥ f(OPT_C)/3. Let (i_1, ..., i_{|R|}) be the greedy order for (R ∪ U, U) with capacity C. Let q′ be the smallest index such that i_{q′} ∈ OPT_{C/2} and i_{q′} ∉ P(C). We denote Z = P(C) ∩ {i_1, ..., i_{q′−1}}. As s(i_{q′}) ≤ C/2, we see that s(Z) > C − s* − s(i_{q′}) ≥ C/6. Then Lemma 1 implies that f(P(C)) ≥ f(Z ∪ U) ≥ (1 − 1/√e)·f(OPT_{C/2}), and hence the claim follows.

Case (b):
In this case, we prove the following two claims. Note that U is nonempty since s* ≥ C/3 > 0. Let i* denote the unique item in U.

Claim 1. f(OPT_C) ≤ 7·f(OPT_{s*}).

Proof. Let T′ = {i ∈ OPT_C : s(i) > s*}. Since C ≤ 3s*, we observe that T′ has at most two items. We evaluate f(OPT_C) by splitting OPT_C depending on T′.

Suppose that T′ = {i} and 2s* ≤ s(i) (≤ C). It follows that f(OPT_C) ≤ f({i}) + f(OPT_C \ {i}). By Lemma 4, we have f({i}) ≤ max{f(U), f(OPT_{s(i)/2})} ≤ max{f(OPT_{s*}), f(OPT_{s(i)/2})}. Since s(i)/2 ≤ C/2 < 2s*, it follows that max{f(OPT_{s*}), f(OPT_{s(i)/2})} ≤ f(OPT_{2s*}), which is at most 6·f(OPT_{s*}) by Lemma 5 together with s* ≤ s(i)/2 ≤ C/2. Moreover, we have f(OPT_C \ {i}) ≤ f(OPT_{C−s(i)}) ≤ f(OPT_{s*}) since C − s(i) ≤ 3s* − 2s* = s*. Thus, we obtain f(OPT_C) ≤ f({i}) + f(OPT_C \ {i}) ≤ 6·f(OPT_{s*}) + f(OPT_{s*}) = 7·f(OPT_{s*}).

Next, assume that T′ = {i} and s(i) < 2s*. We observe that f({i}) ≤ f(OPT_{s*}) by Lemma 4 and s(i)/2 < s*. Note that all items in OPT_C \ {i} have size at most s* and their total size is at most 2s*. Thus we can divide OPT_C \ {i} into three sets A_1, A_2, A_3 with s(A_j) ≤ s* (j = 1, 2, 3). Hence, we have f(OPT_C) ≤ f({i}) + f(OPT_C \ {i}) ≤ f(OPT_{s*}) + 3·f(OPT_{s*}) = 4·f(OPT_{s*}).

If T′ = {i, i′}, then s(i), s(i′) < 2s* and C − s(i) − s(i′) < s*. In this case, by Lemma 4 and s(i)/2, s(i′)/2 < s*, we have f({i}), f({i′}) ≤ f(OPT_{s*}). This implies that f(OPT_C) ≤ f({i}) + f({i′}) + f(OPT_C \ {i, i′}) ≤ (1 + 1 + 1)·f(OPT_{s*}) = 3·f(OPT_{s*}).

Finally, if T′ = ∅, i.e., s(i) ≤ s* for all i ∈ OPT_C, then we can divide OPT_C into five sets A_1, ..., A_5 with s(A_j) ≤ s* (∀j), and hence we have f(OPT_C) ≤ f(A_1) + ··· + f(A_5) ≤ 5·f(OPT_{s*}). Therefore, the claim holds. □

Claim 2. f(OPT_{s*}) ≤ (3/(2γ))·f({i*}).

Proof. Recall that f({i*}) ≥ 2γ·f(OPT_{s*/2}). Take an arbitrary item i with the largest size in OPT_{s*}.

Assume that s(i) > s*/2 and i ∈ S. We have f({i}) ≤ max{f({i′}) : i′ ∈ I_C ∩ S} = f({i*}) since i ∈ S and s(i) ≤ s* ≤ C. Moreover, f(OPT_{s*} \ {i}) ≤ f(OPT_{s*−s(i)}) ≤ f(OPT_{s*/2}) follows from s(i) > s*/2. Hence, we have f(OPT_{s*}) ≤ f({i}) + f(OPT_{s*} \ {i}) ≤ f({i*}) + f(OPT_{s*/2}) ≤ (1 + 1/(2γ))·f({i*}).

Suppose that s(i) > s*/2 and i ∉ S. Then, f({i}) ≤ f(OPT_{s(i)/2}) ≤ f(OPT_{s*/2}) holds by i ∉ S and s(i) ≤ s*, and f(OPT_{s*} \ {i}) ≤ f(OPT_{s*/2}) holds by s(i) > s*/2. These imply that f(OPT_{s*}) ≤ f({i}) + f(OPT_{s*} \ {i}) ≤ 2·f(OPT_{s*/2}) ≤ (1/γ)·f({i*}).

If s(i) ≤ s*/2, then we can divide OPT_{s*} \ {i} into two sets A_1, A_2 such that s(A_j) ≤ s*/2 (j = 1, 2), and hence it follows that f(OPT_{s*}) ≤ f(OPT_{s(A_1)}) + f(OPT_{s(A_2)}) + f(OPT_{s(i)}) ≤ 3·f(OPT_{s*/2}) ≤ (3/(2γ))·f({i*}). Therefore, the claim follows since 3/(2γ) ≥ 1 + 1/(2γ) and 3/(2γ) ≥ 1/γ. □

Combining these claims gives f(P(C)) ≥ f({i*}) ≥ (2γ/21)·f(OPT_C). Hence the theorem is proven. □

(1 − 1/√e)/2-robust universal policy

In this subsection, we devise a randomized (1 − 1/√e)/2-robust universal policy. When the coin flipped by the policy comes up heads, the policy repeatedly computes a set X maximizing f(X) under a bound C′ on the total size s(X), and then appends the items of X to the sequence. The bound C′ is set by the algorithm according to the input items. To compute the set X, we use Algorithm 1 in which the capacity is set to be C′. We double the bound after each iteration.

Also, we cannot use the other adaptive policy P_2 in Algorithm 2. We remark that the proof of Theorem 2 uses only the fact that P_2(C) contains i_C regarding the policy P_2. This implies that, in the worst-case analysis, there is no difference between P_2 and a universal policy that inserts items in decreasing order of the values f({i}). Thus, we replace P_2 with this universal policy. Our algorithm is summarized in Algorithm 4.

Algorithm 4: Randomized (1 − 1/√e)/2-robust universal policy
1: Π ← (); flip a fair coin;
2: if head then
3:   l ← 1, s_min ← min_{i∈I} s(i);
4:   for k ← 1 to ⌈log₂(Σ_{i∈I} s(i)/s_min)⌉ + 1 do
5:     let Y^(k) be the output P(2^{k−1}·s_min) of the policy given in Algorithm 1 for (I, ∅);
6:     foreach i ∈ Y^(k) \ ∪_{j=1}^{k−1} Y^(j) do Π_l ← i, l ← l + 1;
7: else
8:   let Π be the decreasing order of the items i ∈ I in value f({i});
9: return Π;

We remark that Algorithm 4 constructs a sequence of items in polynomial time with respect to the input size. We can prove the following result by using Lemma 1.

Theorem 4.
Algorithm 4 is a (1 − 1/√e)/2 > 0.19-robust randomized universal policy.

Proof. Let Π_1 (respectively, Π_2) be the sequence of items in I returned by Algorithm 4 when the coin comes up head (respectively, tail). Suppose that the given capacity is C. The expected value of the output of Algorithm 4 is f(ALG_C) = (f(Π_1(C)) + f(Π_2(C)))/2. We assume that C ≥ s_min, since otherwise f(OPT_C) = f(ALG_C) = 0. Recall that s_min = min_{i∈I} s(i). Let k be the number satisfying 2^{k−1}·s_min ≤ C < 2^k·s_min. We let i_C ∈ arg max{f({i}) : s(i) ≤ C}.

When k = 1 (i.e., C < 2s_min), we have OPT_C = {i_C} because we can put only one item into the knapsack in this case. We also have Π_2(C) = {i_C}. Hence, it holds that f(OPT_C) = f(Π_2(C)) ≤ 2·(f(Π_1(C)) + f(Π_2(C)))/2 = 2·f(ALG_C).

In what follows, we assume k ≥ 2. We observe that the total size of the items in ∪_{j=1}^{k−1} Y^(j) is at most s(∪_{j=1}^{k−1} Y^(j)) ≤ Σ_{j=1}^{k−1} s(Y^(j)) ≤ Σ_{j=1}^{k−1} 2^{j−1}·s_min ≤ 2^{k−1}·s_min ≤ C. Thus, all items in ∪_{j=1}^{k−1} Y^(j), in particular those in Y^(k−1), are contained in Π_1(C). We also observe that f(Π_2(C)) ≥ f({i_C}) because i_C ∈ Π_2(C). Hence, it holds that 2·f(ALG_C) = f(Π_1(C)) + f(Π_2(C)) ≥ f(Y^(k−1)) + f({i_C}).

We denote C′ = 2^{k−2}·s_min. Recall that Y^(k−1) is the output of the greedy algorithm for (I, ∅) when the capacity is C′. Let (i_1, i_2, ..., i_n) be the greedy order for (I, ∅) with capacity C′. Let q be the smallest index such that i_q ∈ OPT_C and i_q ∉ Y^(k−1). Since this definition implies i_q ∈ I_C, we have f({i_C}) ≥ f({i_q}). By the monotonicity of f, we also have f(Y^(k−1)) ≥ f(Y^(k−1) ∩ {i_1, ..., i_{q−1}}). Hence, 2·f(ALG_C) ≥ f(Y^(k−1) ∩ {i_1, ..., i_{q−1}}) + f({i_q}) ≥ f((Y^(k−1) ∩ {i_1, ..., i_{q−1}}) ∪ {i_q}).

We evaluate f((Y^(k−1) ∩ {i_1, ..., i_{q−1}}) ∪ {i_q}) using Lemma 1. For notational convenience, we denote Y′ = (Y^(k−1) ∪ OPT_C) ∩ {i_1, ..., i_q}. In a similar way to the proof of Theorem 2, we can see that (Y^(k−1) ∩ {i_1, ..., i_{q−1}}) ∪ {i_q} = Y′ by the choice of q. Thus, we have s(Y′) > C′, and it follows that s(Y′)/C ≥ C′/C ≥ (2^{k−2}·s_min)/(2^k·s_min) = 1/4. Then Lemma 1 implies that 2·f(ALG_C) ≥ f(Y′) ≥ (1 − 1/√e)·f(OPT_C), that is, f(ALG_C) ≥ (1 − 1/√e)·f(OPT_C)/2. □

Upper bounds on robustness ratios
KPUC with cancellation
In this subsection, we prove that no randomized policy achieves a robustness ratio better than 8/9 for KPUC even when cancellation is allowed. Suppose that there are three items a, b, c whose sizes are 2, 3, 4, respectively, and whose weights are equal to their own sizes. Recall that the objective value is defined as the sum of the weights of the selected items. Let OPT_{C′} be an optimal solution when the capacity is C′ ∈ R₊. We provide an upper bound for this instance using Yao's principle [19]. We consider an adversary that submits a probability distribution for the capacity as a mixed strategy. Let C be the random variable which represents the capacity. Then, the robustness ratio for this instance is upper-bounded by

max_P min_{C′∈R₊} f(P(C′))/f(OPT_{C′}) ≤ max_P E_C[ f(P(C))/f(OPT_C) ],

where the maximum is taken over all randomized policies P.

We assume that the adversary submits C = 4 with probability 4/9 and C = 5 with probability 5/9. Then, we have

max_P E_C[ f(P(C))/f(OPT_C) ] = max_P ( (4/9)·f(P(4))/f(OPT_4) + (5/9)·f(P(5))/f(OPT_5) ).

Note that f(OPT_4) = 4 and f(OPT_5) = 5. Hence the right-hand side is equal to max_P (f(P(4)) + f(P(5)))/9. In what follows, we prove that this is at most 8/9. Note that the maximum is attained by a deterministic policy. If a deterministic policy P chooses item a first, then it can get objective value at most 2 when the capacity is 4, because neither b nor c fits afterward. Thus, we have f(P(4)) + f(P(5)) ≤ 2 + 5 = 7. Similarly, the value is at most 3 + 5 = 8 if P chooses item b first, and the value is at most 4 + 4 = 8 if P chooses item c first. Since f(P(4)) + f(P(5)) ≤ max{7, 8, 8} = 8, we obtain the following theorem.
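The case analysis can be replayed mechanically. The simulation below is our own illustration, not part of the paper: it enumerates the six insertion orders (a deterministic policy here is pinned down by its first item), packs greedily with overflowing items canceled, and recovers the totals 7, 8, and 8.

```python
from itertools import permutations

sizes = {"a": 2, "b": 3, "c": 4}  # weights equal the sizes

def pack(order, capacity):
    """Greedily insert items in the given order; an overflowing item is canceled."""
    load = 0
    for item in order:
        if load + sizes[item] <= capacity:
            load += sizes[item]
    return load  # objective value = total weight = total size packed

# f(P(4)) + f(P(5)) for every insertion order, grouped by the first item
best = {x: 0 for x in sizes}
for order in permutations(sizes):
    best[order[0]] = max(best[order[0]], pack(order, 4) + pack(order, 5))

print(best)                      # first item a gives 7; b and c give 8
assert best == {"a": 7, "b": 8, "c": 8}
assert max(best.values()) == 8   # hence the expected ratio is at most 8/9
```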
Theorem 5.
Even if cancellation is allowed, no randomized policy has a robustness ratio better than 8/9 > 0.88 for KPUC.

KPUC without cancellation
In this subsection, we prove that no randomized policy achieves a constant robustness ratio for KPUC in the setting where cancellation is not allowed.

Let M be a positive integer of at least 2. We assume that the item set I consists of n items 1, ..., n, and the size and the weight of item i is M^i. The objective function f is defined by f(S) = Σ_{i∈S} M^i for any S ⊆ I. We fix a policy, and show that the robustness ratio of this policy is O(1/M) for this instance if n = Ω(M).

Since there are n items, at least one item is chosen first by the policy with probability at most 1/n. Let i denote such an item. Let us consider the case where the capacity is M^i. When the policy does not choose i first (this happens with probability at least (n−1)/n), the largest objective value achieved by the solution is Σ_{i′=1}^{i−1} M^{i′} ≤ 2·M^{i−1}. Therefore, the expected objective value of the policy is at most M^i/n + ((n−1)/n)·2·M^{i−1}. On the other hand, the optimal solution for this instance consists of only item i, which attains the objective value M^i. The ratio between these values is O(1/M) if n = Ω(M).

Theorem 6.
If cancellation is not allowed, then no randomized policy achieves a constant robustness ratio for
KPUC.

SMPUC
In this subsection, we consider a special case of SMPUC in which the size of each item is 1, i.e., the cardinality constraint case. We show that no deterministic policy (even one with no computational restrictions) achieves a robustness ratio better than (1+√5)/4 (> 0.80), and no randomized policy achieves a robustness ratio better than (5+√5)/8 (> 0.90) for this case. We remark that the cancellation is not useful in the cardinality constraint case. Also, for the cardinality constraint case of SMPUC, a greedy algorithm achieves (1 − 1/e)-robustness (-approximation), and it is known to be the best possible among policies that run in polynomial time.

To present the upper bound on the robustness ratio, let us construct an instance of the problem. Suppose that there are three items a, b, c, each of whose size is 1, and the objective function f is given by

f(∅) = 0, f({a}) = 4, f({b}) = f({c}) = 1 + √5, f({a,b}) = f({a,c}) = 3 + √5, f({b,c}) = f({a,b,c}) = 2 + 2√5.

Note that the function is monotone submodular and symmetric in b and c. If the policy first packs a, then the robustness ratio for capacity 1 is equal to f({a})/f({a}) = 1, and the one for capacity 2 is f({a,b})/f({b,c}) = (3+√5)/(2+2√5) = (1+√5)/4. Otherwise, i.e., if the policy first packs b or c, then the robustness ratio for capacity 1 is f({b})/f({a}) = (1+√5)/4, and the one for capacity 2 is at least f({a,b})/f({b,c}) = (1+√5)/4. Thus, no deterministic policy achieves a robustness ratio better than (1+√5)/4.

Next, we prove the upper bound (5+√5)/8 for randomized policies. Let us consider a randomized policy for the above instance that first inserts a with probability p. Then the robustness ratio for capacity 1 is

(p·f({a}) + (1−p)·f({b}))/f({a}) = (4p + (1−p)(1+√5))/4 = ((3−√5)p + (1+√5))/4.

Also, the robustness ratio for capacity 2 is at most

(p·f({a,b}) + (1−p)·f({b,c}))/f({b,c}) = (p(3+√5) + (1−p)(2+2√5))/(2+2√5) = 1 − ((3−√5)/4)·p.

Note that the former value is monotone increasing in p, and the latter is monotone decreasing in p. Thus, the robustness ratio of the policy is at most

min{ ((3−√5)p + (1+√5))/4, 1 − ((3−√5)/4)·p } ≤ (5+√5)/8,

where equality holds when p = 1/2.

Theorem 7.
For SMPUC with only unit-size items, no deterministic policy achieves a robustness ratio better than (1+√5)/4, and no randomized policy achieves a robustness ratio better than (5+√5)/8.

SMPSC without cancellation

(1/2 − o(1))-approximation algorithm

We present a pseudo-polynomial time (1/2 − o(1))-approximation algorithm for SMPSC without cancellation. We reduce the problem to the following problem.

Submodular maximization problem with an interval independent constraint:
We are given a set I of items. Each item i ∈ I is associated with an interval l_i on a line. We are also given a submodular function f : 2^I → R₊. The objective is to find a subset I′ of I maximizing f(I′) subject to the constraint that no two intervals associated with items in I′ intersect, i.e., l_i ∩ l_j = ∅ for all i, j ∈ I′.

Feldman [7] showed that this problem admits a (1/2 − o(1))-approximation randomized algorithm for monotone submodular functions.

Let us explain the reduction from SMPSC to this problem. Let I be the set of items and f : 2^I → R₊ be the submodular function given in an instance of SMPSC. Recall that T = s(I) and [T′] = {0, 1, ..., T′}. For each i ∈ I, we make T − s(i) + 1 copies i_0, ..., i_{T−s(i)}, and i_j is associated with the interval [j, j + s_i − 1] for each j ∈ [T − s_i]. Let I′ denote the set of these items. We define a function f′ : 2^{I′} → R₊ by

f′(U′) = Σ_{t=1}^{T} p(t)·f({i ∈ I : ∃j ∈ [t − s_i], i_j ∈ U′})

for all U′ ⊆ I′. It is not difficult to prove the following lemma.

Lemma 6. The function f′ is monotone and submodular.

Let U′ ⊆ I′ be a solution for the instance of the problem with an interval independent constraint that consists of the item set I′ (with associated intervals) and the submodular function f′. From U′, we define the ordering of I as follows. If a copy of i ∈ I is included in U′, then pick item i at the time equal to the minimum index of its copies in U′, i.e., min{t ∈ [T] : i_t ∈ U′}. Sort the items in increasing order of the times at which they are picked. The other items follow these items, and their order is decided arbitrarily. This sequence achieves an objective value of at least f′(U′).

Theorem 8.
Problem SMPSC without cancellation admits a pseudo-polynomial time randomized (1/2 − o(1))-approximation algorithm for monotone submodular functions.

Feldman [7] also gave a randomized 1/(e + o(1))-approximation algorithm for the problem with an interval independent constraint and nonmonotone submodular functions. Thus, if the submodular function is not monotone, then SMPSC admits a pseudo-polynomial time randomized 1/(e + o(1))-approximation algorithm.

((1 − 1/√e)/4 − ǫ)-approximation algorithm

In this subsection, we present a polynomial-time algorithm of approximation ratio (1 − 1/√e)/4 − ǫ for any small constant ǫ >
0. This algorithm is based on the idea of Gupta et al. [11] for the stochastic knapsack problem. We first give a pseudo-polynomial time ((1 − 1/√e)/4 − o(1))-approximation algorithm, and then we transform it into a polynomial-time algorithm.

Our algorithm relies on a continuous relaxation of the problem. The relaxation is formulated based on an idea of using time-indexed variables; we regard the knapsack capacity as a time limit while considering that picking an item i spends time s(i). In the relaxation, we have a variable x_{ti} ∈ [0, 1] for each t ∈ [T − 1] and i ∈ I, and x_{ti} = 1 indicates that i is picked at time t. For each t ∈ [T] and i ∈ I, let ¯x_{ti} = Σ_{t′∈[t−s(i)]} x_{t′i} if t ≥ s(i), and let ¯x_{ti} = 0 otherwise. For each t ∈ [T], let ¯x_t be the |I|-dimensional vector whose component corresponding to i ∈ I is ¯x_{ti}. Let F : [0, 1]^I → R₊ be the multilinear extension of the submodular function f. Then, the relaxation is described as

maximize  ¯F(x) := Σ_{t=1}^{T} p(t)·F(¯x_t)
subject to  Σ_{t∈[T−1]} x_{ti} ≤ 1,  ∀i ∈ I,
            Σ_{i∈I} Σ_{t′∈[t]} x_{t′i}·min{s(i), t} ≤ 2t,  ∀t = 1, ..., T,
            x_{ti} ≥ 0,  ∀t ∈ [T − 1], ∀i ∈ I.    (1)

Let us see that (1) relaxes the problem. It is not difficult to see that the first and the third constraints are valid. We prove that the second constraint is valid. Suppose that x is an integer solution that corresponds to a sequence of items. Let I′ be the set of items picked at time t or earlier in this solution. Notice that Σ_{i∈I} Σ_{t′∈[t]} x_{t′i}·min{s(i), t} = Σ_{i∈I′} min{s(i), t} holds. Let j be the item picked latest in I′. Then, since the processing of all items in I′ \ {j} terminates by time t, we have Σ_{i∈I′\{j}} s(i) ≤ t.
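(As an aside, the resulting bound Σ_{i∈I′} min{s(i), t} ≤ 2t can be verified numerically on schedules that pack items back-to-back; the check below is our own illustration, not part of the paper.)

```python
import random

def started_load(sizes, starts, t):
    """Total of min{s(i), t} over items whose start time is at most t."""
    return sum(min(s, t) for s, st in zip(sizes, starts) if st <= t)

random.seed(1)
for _ in range(1000):
    sizes = [random.randint(1, 10) for _ in range(8)]
    random.shuffle(sizes)
    # pack back-to-back: item k starts when items 1, ..., k-1 have finished
    starts = [sum(sizes[:k]) for k in range(len(sizes))]
    for t in range(1, sum(sizes) + 1):
        assert started_load(sizes, starts, t) <= 2 * t
print("second constraint verified on random schedules")
```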
Therefore, Σ_{i∈I′} min{s(i), t} = min{s(j), t} + Σ_{i∈I′\{j}} s(i) ≤ 2t.

Note also that ¯F is a smooth monotone submodular function; i.e., ∂¯F(x)/∂x_{ti} ≥ 0 for any t ∈ [T − 1] and i ∈ I, and ∂²¯F(x)/(∂x_{ti} ∂x_{t′i′}) ≤ 0 for any t, t′ ∈ [T − 1] and i, i′ ∈ I. Hence, we can apply the continuous greedy algorithm for solving (1). Let x* be an obtained feasible solution for (1). We first present a rounding algorithm for this solution. Since the formulation size of this relaxation is not polynomial, this part does not run in polynomial time. We convert the algorithm into a polynomial-time one later.

Rounding algorithm:
The algorithm consists of two rounds. In the first round, each item i chooses an integer t from [T − 1] with probability x*_{ti}/4, and chooses no integer with probability 1 − Σ_{t∈[T−1]} x*_{ti}/4. An item is discarded if it chooses no integer. Let I_1 be the set of remaining items. For each i ∈ I_1, let t_i denote the integer chosen by i.

Then, the algorithm proceeds to the second round. For each i ∈ I_1, let J_i denote {j ∈ I_1 : t_j ≤ t_i}. In the second round, item i is discarded if s(J_i) ≥ t_i. Let I_2 denote the set of items remaining after the second round. The algorithm outputs a sequence obtained by sorting the items i ∈ I_2 in the non-decreasing order of t_i, where ties are broken arbitrarily, and by appending the other items after those in I_2 in an arbitrary order.

For t ∈ [T], let I_t = {i ∈ I_2 : t_i ≤ t − s(i)}. If i ∈ I_t, then i contributes to the objective value of the solution when the knapsack capacity is at least t.

Lemma 7.
For any t ∈ [T], I_t is the output of a monotone (1/4, 1/2)-balanced contention resolution scheme for the maximization problem of f under the knapsack capacity t and the fractional solution ¯x*_t. Hence, the sequence output by the algorithm achieves an objective value of at least ¯F(x*/4)/2 in expectation.

Proof. Take arbitrarily t ∈ [T]. We define a random mapping π : 2^I → 2^I as follows. Let I′ ⊆ I. We let each i ∈ I′ independently sample an integer t′_i from [t − s(i)] with probability x*_{t′_i i}/¯x*_{ti}. Define J′_i = {j ∈ I′ : t′_j ≤ t′_i} for each i ∈ I′. Then, π(I′) is defined as {i ∈ I′ : s(J′_i) < t′_i}.

Let us see that π is a (1/4, 1/2)-balanced contention resolution scheme for ¯x*_t. For this, we analyze the probability that i is included in π(R_{¯x*_t/4}), conditioned on i ∈ R_{¯x*_t/4}. Recall that i ∈ R_{¯x*_t/4} is not included in π(R_{¯x*_t/4}) if s(J′_i) ≥ t′_i. Let j be an arbitrary item other than i, and let s′(j) = min{s(j), t′_i}. Notice that s′(J′_i) ≥ t′_i holds if s(J′_i) ≥ t′_i holds. We give an upper bound on the probability that s′(J′_i) ≥ t′_i happens. The item j ∈ I \ {i} is included in R_{¯x*_t/4} with probability ¯x*_{tj}/4, and then it is included in J′_i (i.e., j chooses an integer at most t′_i) with probability at most Σ_{t′∈[t′_i]} x*_{t′j}/¯x*_{tj}. Hence E[s′(J′_i)] ≤ Σ_{j∈I} Σ_{t′∈[t′_i]} x*_{t′j}·min{s(j), t′_i}/4 ≤ t′_i/2, where the last inequality follows from the second constraint of (1). Applying Markov's inequality, we obtain Pr[s′(J′_i) ≥ t′_i] ≤ 1/2. Therefore, Pr[i ∈ π(R_{¯x*_t/4}) | i ∈ R_{¯x*_t/4}] ≥ 1/2, and π is a (1/4, 1/2)-balanced contention resolution scheme for ¯x*_t.

Next, we prove that π is monotone. Let I′ ⊆ I″ ⊆ I. Then, s(J′_i) is not smaller in the computation of π(I″) than in the computation of π(I′), if each item in I′ samples the same integer both in π(I′) and π(I″). This implies Pr[i ∈ π(I′)] ≥ Pr[i ∈ π(I″)] for each i ∈ I′. Thus, π is monotone.

Let us observe that I_t coincides with π(R_{¯x*_t/4}). Recall that, in the first round of the algorithm, each item i independently chooses an integer t_i. The probability that t_i ≤ t − s(i) is Σ_{t′∈[t−s(i)]} x*_{t′i}/4 = ¯x*_{ti}/4. Hence, the item set {i ∈ I_1 : t_i ≤ t − s(i)} coincides with R_{¯x*_t/4}. Then, the decision of whether or not an item i in this set is discarded in the second round of the algorithm is the same as the computation of π(R_{¯x*_t/4}). Therefore, I_t coincides with π(R_{¯x*_t/4}). □

By Theorem 1 and Lemma 7, our algorithm achieves a ((1 − 1/√e)/4 − o(1))-approximation if it is combined with the continuous greedy algorithm with stopping time 1/2.

We remark that the scaling in the rounding cannot simply be removed. Consider an instance with T > 2 that consists of two items i and j of size T, and an item k of size 1. The weights of these items are all 1, and the objective value is defined as the sum of the weights of the chosen items. Clearly a knapsack of capacity T can include at most one of the items, and hence the maximum objective value of integer solutions is 1. On the other hand, the fractional solution defined by x_{0i} = x_{0j} = (T − 1)/T and x_{T−1,k} = 1 − 1/T is feasible for (1) and has ¯F-value larger than 1, so some scaling is necessary before rounding.

Conversion into a polynomial-time algorithm:
We transform the pseudo-polynomial algorithm into a polynomial-time one. Let W = f(I) and w = min_{i∈I} f({i}). We assume w > 0; if f({i}) = 0 for some i ∈ I, we can safely remove i from I because the submodularity implies that f(S) = f(S \ {i}) for any S ⊆ I with i ∈ S. Recall that we assume that the submodular function f is given as an oracle. Indeed, the algorithms in this paper can be implemented if we can compute the value of the function (or the value of its multilinear extension). We assume that the oracle is encoded in Ω(log(W/w)) bits, and hence we say that an algorithm runs in polynomial time if its running time is expressed as a polynomial in log(W/w).

The idea for the conversion is to use a more compact relaxation, which is obtained by defining variables and constraints for a polynomial number of integers in [T]. Let ǫ be a positive constant smaller than 1, and let η = ⌊log_{1−ǫ}(ǫw/(W log T))⌋. For each t ∈ [T − 1], let ¯p(t) denote Σ_{t′=t+1}^{T} p(t′). We first define {τ_1, τ_2, ..., τ_q, τ_{q+1}} ⊆ [T] such that q = O(log T + log(W/(wǫ))), τ_1 = 1 and τ_{q+1} = T, τ_j < τ_{j+1} ≤ 2τ_j holds for any j = 1, ..., q, and there exists q_η ∈ {0, ..., q} satisfying the following conditions:

• ¯p(τ_j) ≥ ¯p(τ_{j+1} − 1) ≥ (1 − ǫ)·¯p(τ_j) for any j ∈ [q_η];
• ¯p(τ_j) < ǫw/(W log T) for any j ∈ {q_η + 1, ..., q}.

Such a subset of [T] can be defined as follows. For j ∈ {1, ..., η + 1}, let τ′_j be the minimum integer t ∈ [T − 1] such that ¯p(t) < (1 − ǫ)^{j−1}. We assume without loss of generality that ¯p(min_{i∈I} s(i)) = 1, which means τ′_1 ≥ min_{i∈I} s(i). We denote the set of positive integers in {τ′_j : j = 1, ..., η + 1} ∪ {2^j : j ∈ [⌈log T⌉ − 1]} by {τ_1, ..., τ_q}, and sort those integers so that 1 = τ_1 < τ_2 < ··· < τ_q. We define q_η so that τ_{q_η} = τ′_{η+1}. Then, the obtained subset satisfies the above conditions.

In addition, we define the set of integers in {τ_1, ..., τ_{q+1}} ∪ {τ_j − s(i) + 1 : j ∈ {1, ..., q + 1}, i ∈ I} as {ξ_0, ..., ξ_{r+1}}, where 0 = ξ_0 < ξ_1 < ··· < ξ_r < ξ_{r+1} = T. Notice that r = O(n log T + n log(W/(wǫ))).

Roughly speaking, we define variables for each ξ_k (k ∈ [r]), and constraints for each τ_j (j ∈ [q]). Specifically, a variable y_{ki} is defined for each k ∈ [r] and i ∈ I, and y_{ki} replaces the variables x_{ξ_k,i}, ..., x_{ξ_{k+1}−1,i} in (1). For j = 1, ..., q + 1 and i ∈ I, we define an auxiliary variable z_{ji} as Σ_{ξ_{k+1}−1 ≤ τ_j − s(i)} y_{ki}, and define z_j as the |I|-dimensional vector whose component corresponding to i ∈ I is z_{ji}. Then, the compact relaxation is described as follows.

maximize  Σ_{j=1}^{q} ¯p(τ_j)·(F(z_{j+1}) − F(z_j))
subject to  Σ_{k∈[r]} y_{ki} ≤ 1,  ∀i ∈ I,
            Σ_{i∈I} Σ_{ξ_k<τ_j} y_{ki}·min{s(i), τ_j} ≤ 2τ_j,  ∀j ∈ {1, ..., q},
            z_{ji} = Σ_{ξ_{k+1}−1 ≤ τ_j − s(i)} y_{ki},  ∀j ∈ [q], ∀i ∈ I,
            y_{ki} ≥ 0,  ∀i ∈ I, ∀k ∈ [r].    (2)

Lemma 8.
The optimal objective value of (2) is not smaller than that of (1).

Proof. Suppose that x is a feasible solution for (1). From x, we define a solution y for (2) so that y_{ki} = Σ_{t=ξ_k}^{ξ_{k+1}−1} x_{ti} for each k ∈ [r] and i ∈ I. We define the variables z from y by the third constraints of (2). Then, (y, z) is feasible to (2). Indeed, it is immediate from the feasibility of x in (1) that (y, z) satisfies the constraints of (2) except the second one. As for the second constraints, we can observe that Σ_{ξ_k<τ_j} y_{ki} = Σ_{t<τ_j} x_{ti} holds for any i ∈ I and j ∈ {1, ..., q} by the definition of y and the fact that τ_j is included in {ξ_0, ..., ξ_r}.

Let us show that the objective value of (y, z) in (2) is not smaller than ¯F(x). We have

¯F(x) = Σ_{t=1}^{T} p(t)·F(¯x_t) = ¯p(T − 1)·F(¯x_T) + Σ_{t=1}^{T−1} (¯p(t − 1) − ¯p(t))·F(¯x_t) = Σ_{t=0}^{T−1} ¯p(t)·(F(¯x_{t+1}) − F(¯x_t)),

where ¯x_0 denotes the zero-vector for convention. The right-hand side can be written as

Σ_{t=0}^{T−1} ¯p(t)·(F(¯x_{t+1}) − F(¯x_t)) = Σ_{j=1}^{q} Σ_{t=τ_j}^{τ_{j+1}−1} ¯p(t)·(F(¯x_{t+1}) − F(¯x_t)) ≤ Σ_{j=1}^{q} Σ_{t=τ_j}^{τ_{j+1}−1} ¯p(τ_j)·(F(¯x_{t+1}) − F(¯x_t)) = Σ_{j=1}^{q} ¯p(τ_j)·(F(¯x_{τ_{j+1}}) − F(¯x_{τ_j})).   (3)

Recall that ¯x_{τ_j i} = Σ_{t ≤ τ_j − s(i)} x_{ti} for each j ∈ [q] and i ∈ I. There exists k′ ∈ [r] such that τ_j − s(i) + 1 = ξ_{k′}, and hence Σ_{t ≤ τ_j − s(i)} x_{ti} can be written as Σ_{ξ_{k+1}−1 ≤ τ_j − s(i)} y_{ki} = z_{ji}. Thus ¯x_{τ_j} = z_j holds for each j ∈ [q], implying that (3) is equal to the objective value of (y, z). □

Lemma 9.
From a feasible solution to (2) achieving the objective value θ, we can construct a feasible solution to (1) achieving an objective value of at least (1 − ǫ)(θ − ǫw)/2.

Proof. Let (y, z) be a feasible solution to (2). For each k ∈ [r], i ∈ I, and t ∈ {ξ_k, ..., ξ_{k+1} − 1}, we define x_{ti} as y_{ki}/(ξ_{k+1} − ξ_k).

We prove that x/2 is feasible for (1). It is immediate that x satisfies the first and the third constraints of (1). We focus on the second constraints. Let t ∈ {1, ..., T − 1}. Suppose that τ_j ≤ t < τ_{j+1} holds for some j ∈ [q]. Then, for each i ∈ I, we have Σ_{t′∈[t]} x_{t′i} ≤ Σ_{t′<τ_{j+1}} x_{t′i} = Σ_{ξ_k<τ_{j+1}} y_{ki}, where the equality follows from the fact that τ_{j+1} ∈ {ξ_0, ..., ξ_{r+1}}. Moreover, min{s(i), t} ≤ min{s(i), τ_{j+1}} also holds. Hence,

Σ_{i∈I} Σ_{t′∈[t]} x_{t′i}·min{s(i), t} ≤ Σ_{i∈I} Σ_{ξ_k<τ_{j+1}} y_{ki}·min{s(i), τ_{j+1}} ≤ 2τ_{j+1}

holds, where the last inequality follows from the second constraints of (2) when j < q, and from τ_{q+1} = T = Σ_{i∈I} s(i) and the first constraints of (2) when j = q. By its definition, t ≥ τ_j ≥ τ_{j+1}/2. This implies that x/2 satisfies the second constraints of (1).

Let θ be the objective value of (y, z) in (2). We show that the objective value ¯F(x/2) of x/2 is at least (1 − ǫ)(θ − ǫw)/2. We observe that

¯F(x/2) = Σ_{t=1}^{T} p(t)·F(¯x_t/2) ≥ (1/2)·Σ_{t=1}^{T} p(t)·F(¯x_t) = (1/2)·Σ_{j=1}^{q} Σ_{t=τ_j}^{τ_{j+1}−1} ¯p(t)·(F(¯x_{t+1}) − F(¯x_t)).

Since ¯p(t) ≥ ¯p(τ_{j+1} − 1) for any t ≤ τ_{j+1} − 1, the right-hand side of the above inequality satisfies

(1/2)·Σ_{j=1}^{q} Σ_{t=τ_j}^{τ_{j+1}−1} ¯p(t)·(F(¯x_{t+1}) − F(¯x_t)) ≥ (1/2)·Σ_{j=1}^{q} ¯p(τ_{j+1} − 1)·(F(¯x_{τ_{j+1}}) − F(¯x_{τ_j})).

Recall that ¯x_{τ_j} = z_j and ¯p(τ_{j+1} − 1) ≥ (1 − ǫ)·¯p(τ_j) hold for any j ∈ [q_η]. Therefore,

(1/2)·Σ_{j=1}^{q} ¯p(τ_{j+1} − 1)·(F(¯x_{τ_{j+1}}) − F(¯x_{τ_j})) ≥ ((1 − ǫ)/2)·Σ_{j=1}^{q_η} ¯p(τ_j)·(F(z_{j+1}) − F(z_j)).

On the other hand, θ can be written as

θ = Σ_{j=1}^{q} ¯p(τ_j)·(F(z_{j+1}) − F(z_j)) = Σ_{j=1}^{q_η} ¯p(τ_j)·(F(z_{j+1}) − F(z_j)) + Σ_{j=q_η+1}^{q} ¯p(τ_j)·(F(z_{j+1}) − F(z_j)).

Recall that ¯p(τ_j) < ǫw/(W log T) if j ≥ q_η + 1. Moreover, we have q − q_η ≤ log T, and hence

Σ_{j=q_η+1}^{q} ¯p(τ_j)·(F(z_{j+1}) − F(z_j)) ≤ Σ_{j=q_η+1}^{q} ¯p(τ_j)·W ≤ ǫw.

Combining all these discussions, we have ¯F(x/2) ≥ (1 − ǫ)(θ − ǫw)/2. □

We now wrap up our algorithm. Our algorithm first applies the continuous greedy algorithm with stopping time 1/2 to (2), obtaining y such that 4y is feasible for (2) and the objective value of y in (2) is at least 1 − 1/√e times that of any feasible solution, in particular the optimal value h of (2). From y, we compute a solution x for (1) by Lemma 9. We see that 4x is feasible for (1), and the objective value of x in (1) is at least (1 − ǫ)((1 − 1/√e)·h − ǫw)/
2. Since we are assuming ¯p(min_{i∈I} s(i)) = 1, picking the item of the smallest size at time 0 achieves objective value w. This means h ≥ w, and hence the objective value of x is at least (1 − ǫ)(1 − ǫ − 1/√e)·h/2. Then, applying the rounding algorithm to 4x, we obtain a sequence of objective value at least (1 − ǫ)(1 − 1/√e − ǫ)·h/4 in expectation, without writing down x explicitly. In the rounding algorithm, the values of x are used for deciding t_i for each i ∈ I in the first round; this is possible as follows. Notice that x_{ti} takes the same value for any t ∈ [ξ_k, ξ_{k+1}) by the construction of x. Hence each i chooses t_i as follows. First, i chooses k ∈ [r] with probability y_{ki}, and is discarded with probability 1 − Σ_{k∈[r]} y_{ki}. Then, i chooses t_i from [ξ_k, ξ_{k+1}) uniformly at random. This algorithm runs in polynomial time with respect to 1/ǫ and the input size of the instance. We give a pseudo-code of the algorithm in Algorithm 5. With the conversion given above, we obtain the following theorem.

Theorem 9.
For any constant ǫ ∈ (0, 1), there exists a randomized approximation algorithm of approximation ratio (1 − ǫ)(1 − ǫ − 1/√e)/4 − o(1) ≈ 0.098 − ǫ for SMPSC, which runs in polynomial time with respect to 1/ǫ and the input size of the instance.

Algorithm 5: Randomized algorithm of approximation ratio (1 − ǫ)(1 − ǫ − 1/√e)/4 − o(1) for SMPSC
1: for j ∈ {1, ..., η + 1} do compute the minimum integer τ′_j ∈ [T − 1] with ¯p(τ′_j) < (1 − ǫ)^{j−1} by binary search;
2: compute τ_1, ..., τ_{q+1}, q_η, and ξ_0, ..., ξ_{r+1};
3: y ← output of the continuous greedy algorithm with stopping time 1/2 for (2);
4: I′ ← ∅;
5: for i ∈ I do
6:   choose a number k from [r] with probability y_{ki}, or set I′ ← I′ ∪ {i} with probability 1 − Σ_{k∈[r]} y_{ki};
7:   if i ∉ I′ then choose an integer t_i from [ξ_k, ξ_{k+1}) uniformly at random;
8: Π′ ← the sequence obtained by sorting the items i ∈ I \ I′ in non-decreasing order of t_i;
9: Π ← (), l ← 1;
10: for i = 1, ..., |Π′| do
11:   if Σ_{j∈[i−1]} s(Π′_j) < t_{Π′_i} then Π_l ← Π′_i, l ← l + 1;
12:   else I′ ← I′ ∪ {Π′_i};
13: append the items in I′ to the suffix of Π arbitrarily, and return Π;

We considered the maximization problem of a nonnegative monotone submodular function under an unknown or a stochastic knapsack constraint. We presented adaptive policies that achieve constant robustness ratios for an unknown knapsack constraint when cancellation is allowed. For the case where cancellation is not allowed, we presented approximation algorithms that achieve constant approximation ratios for a stochastic knapsack constraint.

There still remain many interesting directions for further study. We mention two of them here. First, even for KPUC with cancellation, there is still a gap between the best known upper and lower bounds on the robustness ratio if we consider randomized policies; the best known lower bound is 1/2, which is achieved by the policy of Disser et al. [6], and we gave an upper bound of 8/9. Second, it also remains open to close the gaps between the lower and upper bounds for SMPUC shown in this paper.

Acknowledgement:
The first author is supported by JSPS KAKENHI Grant Number JP16K16005. The secondauthor is supported by JSPS KAKENHI Grant Number JP17K12646 and JST ERATO Grant Number JPMJER1201,Japan. The third author is supported by JSPS KAKENHI Grant Number JP17K00040 and JST ERATO Grant NumberJPMJER1201, Japan.
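To make the rounding step concrete, the following is a minimal Python sketch of the randomized rounding used in Algorithm 5: each item i picks a class k with probability y_{ki} (and is discarded with the remaining probability), draws t_i uniformly from the integers in [τ_k, τ_{k+1}), and the items are then packed greedily in non-decreasing order of t_i. All function and variable names (round_sequence, sizes, tau) are illustrative choices, not from the paper.

```python
import random

def round_sequence(items, sizes, y, tau, rng=None):
    """Sketch of the randomized rounding in Algorithm 5 (illustrative names).

    items: list of item identifiers
    sizes: dict mapping item i to its size s(i)
    y:     dict mapping item i to [y_{1i}, ..., y_{qi}]; the probabilities may
           sum to less than 1, and i is discarded with the remaining probability
    tau:   breakpoints tau[0..q]; class k corresponds to [tau[k], tau[k+1])
    """
    rng = rng or random.Random()
    discarded, t = [], {}
    for i in items:
        r, acc, chosen = rng.random(), 0.0, None
        for k, p in enumerate(y[i]):
            acc += p
            if r < acc:
                chosen = k
                break
        if chosen is None:  # discarded with probability 1 - sum_k y_{ki}
            discarded.append(i)
            continue
        # t_i is uniform over the integers in [tau_k, tau_{k+1})
        t[i] = rng.randrange(tau[chosen], tau[chosen + 1])
    order = sorted(t, key=t.get)  # non-decreasing t_i
    seq, prefix = [], 0
    for i in order:
        if prefix < t[i]:  # keep i only if the preceding total size is below t_i
            seq.append(i)
        else:
            discarded.append(i)
        prefix += sizes[i]
    return seq + discarded  # discarded items are appended at the end arbitrarily
```

With degenerate probabilities (each y_{ki} ∈ {0, 1}) the rounding is deterministic, which makes the packing order easy to trace by hand: items forced into different classes come out sorted by their resulting t_i.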
References

[1] M. Adamczyk, M. Sviridenko, and J. Ward. Submodular stochastic probing on matroids. Math. Oper. Res., 41(3):1022–1038, 2016.
[2] A. Asadpour, H. Nazerzadeh, and A. Saberi. Stochastic submodular maximization. In Internet and Network Economics, 4th International Workshop, WINE 2008, Shanghai, China, December 17-20, 2008. Proceedings, pages 477–489, 2008.
[3] G. Călinescu, C. Chekuri, M. Pál, and J. Vondrák. Maximizing a monotone submodular function subject to a matroid constraint. SIAM J. Comput., 40(6):1740–1766, 2011.
[4] M. Dabney. A PTAS for the uncertain capacity knapsack problem. Master's thesis, Clemson University, 2010.
[5] B. C. Dean, M. X. Goemans, and J. Vondrák. Approximating the stochastic knapsack problem: The benefit of adaptivity. Math. Oper. Res., 33(4):945–964, 2008.
[6] Y. Disser, M. Klimm, N. Megow, and S. Stiller. Packing a knapsack of unknown capacity. SIAM J. Discrete Math., 31(3):1477–1497, 2017.
[7] M. Feldman. Maximization Problems with Submodular Objective Functions. PhD thesis, Technion – Israel Institute of Technology, July 2013.
[8] M. Feldman, J. Naor, and R. Schwartz. A unified continuous greedy algorithm for submodular maximization. In IEEE 52nd Annual Symposium on Foundations of Computer Science, FOCS 2011, Palm Springs, CA, USA, October 22-25, 2011, pages 570–579, 2011.
[9] M. Feldman, O. Svensson, and R. Zenklusen. Online contention resolution schemes. In Proceedings of the Twenty-Seventh Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2016, Arlington, VA, USA, January 10-12, 2016, pages 1014–1033, 2016.
[10] D. Golovin and A. Krause. Adaptive submodularity: Theory and applications in active learning and stochastic optimization. J. Artif. Intell. Res. (JAIR), 42:427–486, 2011.
[11] A. Gupta, R. Krishnaswamy, M. Molinaro, and R. Ravi. Approximation algorithms for correlated knapsacks and non-martingale bandits. In IEEE 52nd Annual Symposium on Foundations of Computer Science, FOCS 2011, Palm Springs, CA, USA, October 22-25, 2011, pages 827–836, 2011.
[12] A. Gupta, V. Nagarajan, and S. Singla. Adaptivity gaps for stochastic probing: Submodular and XOS functions. In Proceedings of the Twenty-Eighth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2017, Barcelona, Spain, Hotel Porta Fira, January 16-19, pages 1688–1702, 2017.
[13] W. Höhn and T. Jacobs. On the performance of Smith's rule in single-machine scheduling with nonlinear cost. ACM Trans. Algorithms, 11(4):25:1–25:30, 2015.
[14] J. Leskovec, A. Krause, C. Guestrin, C. Faloutsos, J. M. VanBriesen, and N. S. Glance. Cost-effective outbreak detection in networks. In Proceedings of the 13th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Jose, California, USA, August 12-15, 2007, pages 420–429, 2007.
[15] W. Ma. Improvements and generalizations of stochastic knapsack and multi-armed bandit approximation algorithms: Extended abstract. In Proceedings of the Twenty-Fifth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2014, Portland, Oregon, USA, January 5-7, 2014, pages 1154–1163, 2014.
[16] N. Megow and J. Mestre. Instance-sensitive robustness guarantees for sequencing with unknown packing and covering constraints. In Innovations in Theoretical Computer Science, ITCS '13, Berkeley, CA, USA, January 9-12, 2013, pages 495–504, 2013.
[17] N. Megow and J. Verschae. Dual techniques for scheduling on a machine with varying speed. In Automata, Languages, and Programming - 40th International Colloquium, ICALP 2013, Riga, Latvia, July 8-12, 2013, Proceedings, Part I, pages 745–756, 2013.
[18] M. Sviridenko. A note on maximizing a submodular set function subject to a knapsack constraint. Oper. Res. Lett., 32(1):41–43, 2004.
[19] A. C. Yao. Probabilistic computations: Toward a unified measure of complexity (extended abstract). In 18th Annual Symposium on Foundations of Computer Science, Providence, Rhode Island, USA, 31 October - 1 November 1977.