Maximizing Social Welfare Subject to Network Externalities: A Unifying Submodular Optimization Approach
S. Rasoul Etesami
Department of Industrial and Systems Engineering & Coordinated Science Lab, University of Illinois at Urbana-Champaign, Urbana, IL 61801. ([email protected])
Abstract — We consider the problem of maximizing social welfare by allocating indivisible items to a set of agents subject to network externalities. We first provide a general formulation that captures some of the known models as special cases. We then show that the social welfare maximization problem enjoys some useful submodular or supermodular properties, which allow us to devise simple polynomial-time approximation algorithms using the Lovász extension and the multilinear extension of the objective function. Our principled approach recovers or improves some of the existing algorithms and provides a simple unifying method for maximizing social welfare subject to network externalities.
I. INTRODUCTION
One of the major challenges in resource allocation over networks is the existence of externalities among the agents. Network externality (also called network effect) is the effect that one user of a good or service has on the product's value to other people. Externalities exist in many network systems such as social, economic, and cyber-physical networks and can substantially affect resource allocation strategies and outcomes. In fact, due to the rapid proliferation of online social networks such as Facebook, Twitter, and LinkedIn, the magnitude of such network effects has risen to an entirely new level [1]. Here are just a few examples.
Allocation of Networked Goods: Many goods have higher values when used in conjunction with others [2]. For instance, people often derive higher utility when using the same product, such as cellphones or game consoles. One reason is that companies often provide extra benefits to those who adopt their products. Another reason is that users who buy the same product can share many benefits, such as installing similar apps or exchanging video games. Such products are often referred to as networked goods and are said to exhibit positive network externalities.
Cyber-Physical Network Security: An essential task in cyber-physical security is that of providing a resource allocation mechanism for securing the operation of a set of networked agents (e.g., service providers, computers, or data centers) despite external malicious attacks. One way of doing that is to distribute limited security resources among the agents efficiently (e.g., by installing antivirus software on a subset of servers) [3]. However, since the agents are interconnected, the compromise of one agent puts its neighbors at higher risk, and such a failure can cascade over the entire network. As a result, a decision on how much to invest in one agent will indirectly affect all the others. Therefore, an efficient allocation of scarce security resources subject to network externalities among the agents is a major challenge.

This work is supported by the National Science Foundation CAREER Award under Grant No. EPCN-1944403.
Network Congestion Games: There are many instances of socioeconomic systems, such as transportation or facility location networks, in which the utility of agents decreases as more agents use the same resource [4], [5]. For instance, as more drivers (agents) use the same road (resource), the traffic congestion on that road increases, hence increasing the travel time and energy consumption for the drivers. Such network effects are often referred to as negative externalities and have been studied under the general framework of congestion games in both centralized and game-theoretic settings [4]–[8].

Motivated by the above and many other similar examples, our objective in this paper is to study allocation problems when agents exhibit network externalities. While this problem has received significant attention in the past literature [2], [9]–[11], those results mainly focus on allocating and pricing copies of a single item; other than a handful of results [4], [12], [13], the problem of maximizing the social welfare by allocating multiple items subject to network externalities has not been well-studied before. Therefore, we consider the more realistic situation with multiple competing items, in which the agents in the network are demand-constrained.
A. Related Work
There are many papers that consider resource allocation under various network externality models. For example, negative externalities have been studied in routing [5], facility location [4], welfare maximization in congestion games [6], and monopoly pricing over a social network [11]. On the other hand, positive externalities have been addressed in the context of mechanism design and optimal auctions [2], [13], congestion games with positive externalities [6], [12], and pricing networked goods [9], [14]. There are also some results that consider unrestricted externalities, where a mixture of both positive and negative externalities may exist in the network [6], [15]. However, those results are often for simplified anonymous models in which the agents do not care about the identity of the other agents that share the resource with them. One reason is that for unrestricted and non-anonymous externalities, maximizing the social welfare with $n$ agents is $n^{1-\epsilon}$-inapproximable for any $\epsilon > 0$ [6], [12]. Therefore, in this work, we consider the problem of maximizing the social welfare with non-anonymous agents but with either positive or negative externalities.

Optimal resource allocation subject to network effects is typically NP-hard and even hard to approximate [2], [4], [6], [16]. Therefore, a large body of past literature has been devoted to devising polynomial-time approximation algorithms with good performance guarantees. Maximizing social welfare subject to network externalities can often be cast as a special case of the more general combinatorial welfare maximization problem [17]. However, combinatorial welfare maximization with general valuation functions is hard to approximate to within a factor better than $\sqrt{n}$, where $n$ is the number of items [18].
Therefore, to obtain improved approximation algorithms for the special case of welfare maximization with network externalities, one must rely on more tailored algorithms that take into account the special structure of the agents' utility functions.

Another closely related problem on resource allocation under network effects is submodular optimization [19], [20]. The reason is that the utility functions of networked agents often exhibit a diminishing-returns property as more agents adopt the same product. That property makes a variety of submodular optimization techniques quite amenable to designing improved approximation algorithms. While this connection has been studied in the past literature for the special case of a single item [2], it has not been explored for the more complex case of multiple items. As we will show in this paper, there is a close relation between multi-item welfare maximization under network externalities and the so-called submodular cost allocation [21], [22], multi-agent submodular optimization [23], and a generalization of both known as multivariate submodular optimization [24]. However, adapting those general frameworks naively to our problem setting can only deliver poly-logarithmic approximation factors, and that is the best one can hope for, as nearly matching logarithmic lower bounds are known. Instead, we will use the special structure of the agents' utility functions and new ideas from submodular optimization to provide improved constant-factor approximation algorithms for maximizing the social welfare subject to network externalities.

B. Contributions and Organization
We consider a general model for multi-item welfare maximization subject to network externalities. We first show that our model subsumes some of the existing models in the literature. By making a connection to multi-agent submodular optimization, we provide a unifying method for devising improved approximation algorithms for those problems, with arguably much simpler algorithms and analysis.

The paper is organized as follows. In Section II, we formally introduce the problem of multi-item social welfare maximization subject to network externalities. In Section III, we provide some preliminary results from submodular optimization for later use. In Section IV, we consider the special case of linear externality functions and provide a 2-approximation algorithm for that problem. In Section V, we extend our results to polynomial and more general convex externality functions and devise improved approximation algorithms in terms of the curvature of the externality functions. In Section VI, we consider the problem of maximum social welfare under negative linear externalities. We conclude the paper in Section VII.

II. PROBLEM FORMULATION
Consider a set $[n] := \{1, \ldots, n\}$ of agents and a set $[m] := \{1, \ldots, m\}$ of different items. There are unlimited copies of each item; however, each agent can receive at most one item. For any ordered pair of agents $(j,k)$ and any item $i$, there is a nonnegative weight $a_{ijk} \ge 0$ indicating the amount by which the utility of agent $j$ is influenced by agent $k$, given that both agents $j$ and $k$ receive the same item $i$. Let $S_i$ denote the set of agents that receive item $i$. For any $j \in S_i$, the utility that agent $j$ derives from such an allocation is given by $f_{ij}\big(\sum_{k \in S_i} a_{ijk}\big)$, where $f_{ij} : \mathbb{R}_+ \to \mathbb{R}_+$, $f_{ij}(0) = 0$, is a nondecreasing nonnegative function. In case the functions $f_{ij}$ are convex, we refer to them as convex externalities. The goal is to assign exactly one item to each agent in order to maximize the social welfare. In other words, we want to find a partition $S_1, \ldots, S_m$ of the agents (i.e., $\cup_{i=1}^m S_i = [n]$ and $S_i \cap S_{i'} = \emptyset$ for all $i \neq i'$) maximizing the sum of the agents' utilities:

$$\max_{S_1, \ldots, S_m} \sum_{i=1}^m \sum_{j \in S_i} f_{ij}\Big(\sum_{k \in S_i} a_{ijk}\Big). \qquad (1)$$

Using the fact that $f_{ij}(0) = 0$ for all $i, j$, the maximum social welfare problem (1) can also be formulated as the following integer program (IP):

$$\max \ \sum_{i,j} f_{ij}\Big(\sum_{k=1}^n a_{ijk} x_{ji} x_{ki}\Big) \quad \text{s.t.} \quad \sum_{i=1}^m x_{ji} = 1 \ \ \forall j \in [n], \qquad x_{ji} \in \{0,1\} \ \ \forall i \in [m],\ j \in [n], \qquad (2)$$

where $x_{ji} = 1$ if and only if item $i$ is assigned to agent $j$.

Example 1:
For the special case where $f_{ij}(y) = y$ for all $i, j$, the objective function in (1) becomes $\sum_i \sum_{j,k \in S_i} a_{ijk}$, hence recovering the optimization problem studied in [12]. We refer to such externality functions as linear externalities.

Example 2:
Let $G = ([n], E)$ be a fixed directed graph among the agents, and denote the set of out-neighbors of agent $j$ by $N_j$. In the special case when each agent treats all of its neighbors equally, i.e., for every $i$, $a_{ijk} = 1$ if $k \in N_j$ and $a_{ijk} = 0$ otherwise, the objective function in (1) becomes $\sum_i \sum_{j \in S_i} f_{ij}\big(|S_i \cap N_j|\big)$. Therefore, we recover the social welfare maximization problem studied in [13]. For this special case, it was shown in [13, Theorem 3.9] that when the externality functions are convex and bounded above by a polynomial of degree $d$, one can find an $O(2^d)$-approximation for the optimum social welfare allocation. In this work, we will improve this result for the more general setting of (1).

III. PRELIMINARY RESULTS
This section provides some definitions and preliminary results, which will be used later to establish our main results. We start with the following definition.
Definition 1:
Given a ground set $N$, a set function $f : 2^N \to \mathbb{R}$ is called submodular if and only if $f(A) + f(B) \ge f(A \cup B) + f(A \cap B)$ for any $A, B \subseteq N$. Equivalently, $f(\cdot)$ is submodular if for any two nested subsets $A \subseteq B$ and $i \notin B$, we have $f(A \cup \{i\}) - f(A) \ge f(B \cup \{i\}) - f(B)$. A set function $f$ is called supermodular if $-f$ is submodular. Finally, $f$ is called monotone if $f(A) \le f(B)$ for $A \subseteq B$.

A. Lovász Extension
Let $N$ be a ground set of cardinality $n$. Each real-valued set function on $N$ corresponds to a function $f : \{0,1\}^n \to \mathbb{R}$ over the vertices of the hypercube $\{0,1\}^n$, where each subset is represented by its binary characteristic vector. Therefore, by abuse of notation, we use $f(S)$ and $f(\chi_S)$ interchangeably, where $\chi_S \in \{0,1\}^n$ is the characteristic vector of the set $S \subseteq N$. The Lovász extension of $f$ to the continuous unit cube $[0,1]^n$, denoted by $f^L : [0,1]^n \to \mathbb{R}$, is defined by

$$f^L(x) := \mathbb{E}_\theta[f(x^\theta)] = \int_0^1 f(x^\theta)\, d\theta, \qquad (3)$$

where $\theta \in [0,1]$ is a uniform random variable, and $x^\theta$ for a given vector $x \in [0,1]^n$ is defined by $x^\theta_i = 1$ if $x_i \ge \theta$, and $x^\theta_i = 0$ otherwise. In other words, $x^\theta$ is a random binary vector obtained by rounding all the coordinates of $x$ that are at least $\theta$ to $1$, and the remaining ones to $0$. In particular, $f^L(x)$ is equal to the expected value of $f$ at the rounded solution $x^\theta$, where the expectation is with respect to the randomness introduced by $\theta$. It is known that the Lovász extension $f^L$ is a convex function of $x$ if and only if the corresponding set function $f$ is submodular [25]. This property makes the Lovász extension a suitable continuous relaxation for minimizing a submodular set function.

B. Multilinear Extension
As mentioned earlier, the Lovász extension provides a convex continuous extension of a submodular function, which is not very useful for maximizing a submodular function. For the maximization problem, one can instead consider another continuous extension known as the multilinear extension. The multilinear extension of the set function $f : 2^N \to \mathbb{R}$ at a given vector $x \in [0,1]^n$, denoted by $f^M(x)$, is given by the expected value of $f$ at a random set $R(x)$ that is sampled from the ground set $N$ by including each element $i$ in $R(x)$ independently with probability $x_i$, i.e.,

$$f^M(x) = \mathbb{E}\big[f\big(R(x)\big)\big] = \sum_{R \subseteq N} f(R) \prod_{i \in R} x_i \prod_{i \notin R} (1 - x_i).$$

One can show that the Lovász extension is always a lower bound for the multilinear extension, i.e., $f^L(x) \le f^M(x)$ for all $x \in [0,1]^n$. Moreover, at any binary vector $x \in \{0,1\}^n$, we have $f^L(x) = f(x) = f^M(x)$. In general, the multilinear extension of a submodular function is neither convex nor concave. However, it is known that a so-called continuous greedy algorithm can approximately maximize in polynomial time the multilinear extension of a nonnegative submodular function subject to a certain class of constraints.

Lemma 1: [26, Theorems I.1 & I.2] For any nonnegative submodular function $f : 2^N \to \mathbb{R}_+$ and down-monotone solvable polytope $\mathcal{P} \subseteq [0,1]^n$, there is a polynomial-time continuous greedy algorithm that finds a point $x^* \in \mathcal{P}$ such that $f^M(x^*) \ge \frac{1}{e} f(OPT)$, where $OPT$ is the optimal integral solution to the maximization problem $\max_{x \in \mathcal{P} \cap \{0,1\}^n} f^M(x)$. If, in addition, the submodular function $f$ is monotone, the approximation guarantee can be improved to $f^M(x^*) \ge (1 - \frac{1}{e}) f(OPT)$.

According to the above lemma, the multilinear extension provides a suitable relaxation for devising an approximation algorithm for submodular maximization.
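As an aside for intuition (not part of the original development), $f^M$ is typically evaluated by sampling rather than by the exponential sum above; a minimal Monte Carlo sketch in Python, using the modular function $f(S) = |S|$ (whose multilinear extension is exactly $\sum_i x_i$) as a sanity check:

```python
import random

def multilinear_extension_mc(f, x, samples=20000, seed=0):
    """Monte Carlo estimate of f^M(x) = E[f(R(x))], where the random set
    R(x) contains each element i independently with probability x_i."""
    rng = random.Random(seed)
    n = len(x)
    total = 0.0
    for _ in range(samples):
        # Sample R(x) by independent coin flips with biases x_i.
        R = frozenset(i for i in range(n) if rng.random() < x[i])
        total += f(R)
    return total / samples

# For f(S) = |S|, the multilinear extension is sum(x); here 0.2+0.5+0.8 = 1.5.
est = multilinear_extension_mc(len, [0.2, 0.5, 0.8])
```

For a submodular $f$, the same estimator can be compared against the Lovász extension to check the bound $f^L(x) \le f^M(x)$ numerically.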
Concretely, one can first approximately solve the relaxed multilinear extension in polynomial time and then round the solution to obtain an approximate integral feasible solution.

C. Fair Contention Resolution
Here, we provide some background on a general randomized rounding scheme known as fair contention resolution, which allows one to round a fractional solution to an integral one while preserving specific properties. Intuitively, given a fractional solution to a resource allocation problem, one ideally wants to round the solution to an integral allocation in which each item is allocated to only one agent. However, a natural randomized rounding often does not achieve that property, as multiple agents may receive the same item. To resolve that issue, one uses a "contention resolution scheme," which determines which agent should receive the item while losing at most a constant factor in the objective value.

More precisely, suppose $n$ players compete for an item independently with probabilities $p_1, p_2, \ldots, p_n$. Denote by $A$ the random set of players who request the item in the first phase, i.e., $\mathbb{P}(i \in A) = p_i$ independently for each $i$. In the second phase, if $|A| \le 1$, we do not make any change to the allocation. Otherwise, we allocate the item to each player $i \in A$ who requested the item in the first phase with probability

$$r_{i,A} = \frac{1}{\sum_{i'} p_{i'}} \Big( \sum_{k \in A \setminus \{i\}} \frac{p_k}{|A| - 1} + \sum_{k \notin A} \frac{p_k}{|A|} \Big).$$

Note that for any $A \neq \emptyset$, we have $\sum_{i \in A} r_{i,A} = 1$, so that after the second phase the item is allocated to exactly one player with probability $1$. The importance of such a fair contention resolution is that if the item was requested in the first phase by a player, then after the second phase that player still receives the item with probability at least $1 - \frac{1}{e}$. More precisely, it can be shown that [27]:

Lemma 2: [27, Lemma 1.5] Conditioned on player $k$ requesting the item, she obtains it with probability exactly

$$\frac{1 - \prod_{i=1}^n (1 - p_i)}{\sum_{i=1}^n p_i} \ge 1 - \frac{1}{e}.$$

A polytope
$\mathcal{P} \subseteq [0,1]^n$ is solvable if linear functions can be maximized over it in polynomial time. It is down-monotone if $x \in \mathcal{P}$ and $0 \le y \le x$ (coordinatewise) imply that $y \in \mathcal{P}$.

IV. MAXIMIZING SOCIAL WELFARE WITH POSITIVE LINEAR EXTERNALITIES
In this section, we consider the social welfare maximization problem with positive linear externalities (see Example 1). This problem was first introduced and studied in [12], where the authors provided a 2-approximation algorithm for maximizing the social welfare by solving a linear program with $O(n^2 m)$ variables, together with an elegant iterative greedy algorithm. However, the analysis in [12] is relatively involved and seems somewhat ad hoc. In particular, it is not clear how one can extend those results to the case of nonlinear externalities.

Here, we establish the same approximation guarantee using a natural nonlinear program for the social welfare maximization problem with positive linear externalities. By showing that the objective function is a monotone supermodular function, we provide a 2-approximation algorithm by solving a concave program with $O(nm)$ variables based on the Lovász extension of the objective function. We then show that a well-known randomized rounding algorithm for submodular minimization can easily return a 2-approximate solution. Therefore, we obtain a very natural and principled method for devising a constant-factor approximation algorithm. In particular, our analysis is arguably simpler than that in [12] and provides a clear explanation of why the iterative greedy algorithm performs well, by making a novel connection to submodular optimization. This approach allows us to substantially extend those results to more general convex externalities using the same principled framework (even with additional constraints). Interestingly, one can show that derandomizing our randomized algorithm for the case of positive linear externalities recovers the iterative greedy algorithm developed in [12].

For the case of linear externalities $f_{ij}(y) = y$ for all $i, j$, the IP (2) can be written in the form of the following IP:

$$\max \Big\{ \sum_{i=1}^m f_i(x_i) : \sum_{i=1}^m x_i = \mathbf{1}, \ x_i \in \{0,1\}^n \ \forall i \Big\}, \qquad (4)$$

where for any $i \in [m]$, we define $x_i$ to be the column vector $x_i = (x_{1i}, \ldots, x_{ni})'$, and $f_i(x_i) := x_i' A_i x_i$ is a quadratic function with $A_i := (a_{ijk}) \in \mathbb{R}^{n \times n}_+$. Moreover, we use $\mathbf{1}$ to refer to the column vector of all ones.

Since all the entries of $A_i$ are nonnegative, it is easy to see that $f_i : \{0,1\}^n \to \mathbb{R}_+$ is a nonnegative and monotone supermodular set function. Now let $f^L_i : [0,1]^n \to \mathbb{R}_+$ be the continuous Lovász extension of $f_i$ from the hypercube $\{0,1\}^n$ to the unit cube $[0,1]^n$. Since $-f_i$ is submodular, the function $-f^L_i$ is convex, which implies that $f^L_i$ is a nonnegative concave function. In fact, using (3) and the quadratic structure of $f_i(x_i) = x_i' A_i x_i$, one can derive the following closed form for $f^L_i$:

$$f^L_i(x_i) = \mathbb{E}_\theta\Big[\sum_{j,k} a_{ijk} x^\theta_{ji} x^\theta_{ki}\Big] = \sum_{j,k} a_{ijk}\, \mathbb{E}_\theta[x^\theta_{ji} x^\theta_{ki}] = \sum_{j,k} a_{ijk}\, \mathbb{P}\{x^\theta_{ji} = 1, x^\theta_{ki} = 1\} = \sum_{j,k} a_{ijk} \min\{x_{ji}, x_{ki}\}.$$

From the above relation, it is easy to see that for any $i \in [m]$, the function $f^L_i$ is indeed a nonnegative monotone concave function. As the objective function in (4) is separable across the variables $x_i$, $i \in [m]$, by linearity of expectation $\sum_{i=1}^m f^L_i(x_i)$ equals the Lovász extension of the objective function in (4), which is also a concave function. Therefore, we obtain the following concave relaxation for the IP (4), whose optimal value upper-bounds that of (4):

$$\max \Big\{ \sum_{i=1}^m f^L_i(x_i) : \sum_{i=1}^m x_i = \mathbf{1}, \ x_i \ge 0 \ \forall i \in [m] \Big\}. \qquad (5)$$

By abuse of notation, let us denote the optimal (fractional) solution to (5), which can be found in polynomial time, by $x$, where $x \in \mathbb{R}^{n \times m}_+$ is an $n \times m$ matrix whose $i$th column is given by $x_i$. Next, we provide a simple algorithm to round $x$ to a feasible integral solution to (4) whose expected objective value is at least $\frac{1}{2}$ of the optimum value in (5).

A. An Iterative Randomized Rounding Algorithm
Here, we show that a slight variant of the randomized rounding algorithm derived from the work of Kleinberg and Tardos (KT) for metric labeling [28] provides a 2-approximation for the IP (4) when applied to the solution of (5). The rounding scheme is summarized in Algorithm 1.
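For illustration, one run of the rounding loop can be sketched in Python as follows (an illustrative rendering of the scheme, not the authors' implementation; `x` is any fractional $n \times m$ matrix with rows summing to one):

```python
import random

def kt_rounding(x, seed=0):
    """Iterative KT-style rounding: x[j][i] is the fractional amount of
    item i given to agent j (each row sums to 1). Returns item sets
    S_1, ..., S_m that partition the agents."""
    rng = random.Random(seed)
    n, m = len(x), len(x[0])
    S = [set() for _ in range(m)]          # S[i]: agents assigned item i
    unassigned = set(range(n))
    while unassigned:
        i = rng.randrange(m)               # pick a random item
        theta = rng.random()               # pick a random threshold
        chosen = {j for j in unassigned if x[j][i] >= theta}
        S[i] |= chosen                     # assign the surviving agents
        unassigned -= chosen
    return S

parts = kt_rounding([[0.5, 0.5], [0.2, 0.8], [1.0, 0.0]])
```

Each pass draws a random item and threshold, assigns the surviving unassigned agents to that item, and repeats until every agent is covered, so the output is always a partition of $[n]$; since each row sums to one, the loop terminates with probability one.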
Algorithm 1
Iterative KT Rounding Algorithm
• Let $x$ be the optimal solution to the Lovász relaxation (5).
• During the course of the algorithm, let $S$ be the set of allocated agents and $S_i$ be the set of agents that are allocated item $i$. Initially set $S = \emptyset$ and $S_i = \emptyset$ for all $i$.
• While $S \neq [n]$, pick $i \in [m]$ and $\theta \in [0,1]$ uniformly at random. Let $S^\theta_i := \{j \in [n] \setminus S : x_{ji} \ge \theta\}$, and update $S_i \leftarrow S_i \cup S^\theta_i$ and $S \leftarrow S \cup S^\theta_i$.
• Return $S_1, \ldots, S_m$.

Algorithm 1 proceeds in several rounds until all the agents are assigned an item. At each round, the algorithm selects a random item $I \in [m]$ and a random subset of unassigned agents $S^\theta_I \subseteq [n] \setminus S$, and assigns item $I$ to the agents in $S^\theta_I$.

Theorem 1:
Algorithm 1 is a 2-approximation algorithm for the IP (4) with positive linear externalities.

Proof:
We show that the expected value of the solution returned by Algorithm 1 is at least $\frac{1}{2} f^L(x)$, where $f^L(x) := \sum_{i=1}^m f^L_i(x_i)$ and $x$ is the optimal solution to (5). To that end, let $A := S^\theta_I$ be the random set of agents that are selected during the first round of the algorithm. It is enough to show

$$\mathbb{E}[f_I(A)] \ge \tfrac{1}{2}\, \mathbb{E}\big[f^L(x) - f^L(x|_{\bar{A}})\big], \qquad (6)$$

where $f^L(x|_{\bar{A}})$ denotes the value of the Lovász extension $f^L$ when restricted to the rows of $x$ corresponding to the agents $j \in \bar{A} := [n] \setminus A$. (Equivalently, $f^L(x|_{\bar{A}})$ equals $f^L$ evaluated at the solution obtained from $x$ by setting all the rows $j \in A$ to $0$.) In other words, inequality (6) shows that the expected utility of the agents assigned during the first round is at least $\frac{1}{2}$ of the expected value that those agents fractionally contribute to the Lovász extension. If we can show (6), the theorem then follows by induction on the number of agents and using superadditivity of $f_i(\cdot)$. More precisely, given $A \neq \emptyset$, let $S_1, \ldots, S_m$ be the (random) sets returned by the algorithm when applied to the remaining agents in $\bar{A}$. As $|\bar{A}| < n$, using the induction hypothesis on the remaining set of agents $\bar{A}$, we have

$$\mathbb{E}\Big[\sum_i f_i(S_i) \,\Big|\, A\Big] \ge \frac{1}{2} \max_{y \mathbf{1} = \mathbf{1},\, y \ge 0} f^L(y) \ge \frac{1}{2} f^L(x|_{\bar{A}}), \qquad (7)$$

where $y \in \mathbb{R}^{|\bar{A}| \times m}_+$ is a variable restricted only to the agents in $\bar{A}$, and the second inequality holds because $x|_{\bar{A}}$ is a feasible solution to the middle maximization. Now, we have

$$\mathbb{E}[\mathrm{Alg}] = \mathbb{E}\Big[\sum_{i \neq I} f_i(S_i) + f_I(S_I \cup A)\Big] \ge \mathbb{E}\Big[\sum_i f_i(S_i) + f_I(A)\Big] = \mathbb{E}\Big[\mathbb{E}\Big[\sum_i f_i(S_i) \,\Big|\, A\Big] + f_I(A)\Big] \ge \frac{1}{2}\mathbb{E}\big[f^L(x|_{\bar{A}})\big] + \frac{1}{2}\mathbb{E}\big[f^L(x) - f^L(x|_{\bar{A}})\big] = \frac{1}{2} f^L(x),$$

where the first inequality is by superadditivity of the functions $f_i(\cdot)$, and the last inequality holds by (6) and (7). Finally, to establish (6), we can write

$$\mathbb{E}[f_I(A)] = \frac{1}{m} \sum_{i=1}^m \mathbb{E}_\theta[f_i(S^\theta_i)] = \frac{1}{m} f^L(x). \qquad (8)$$

Let $f^L(x_j, x_k) := \sum_{i=1}^m a_{ijk} \min\{x_{ji}, x_{ki}\}$ be the restriction of $f^L$ to the rows $x_j$ and $x_k$ of the solution $x$. Note that by definition we have $f^L(x) = \sum_{j,k} f^L(x_j, x_k)$ and $f^L(x|_{\bar{S}^\theta_i}) = \sum_{j,k \in \bar{S}^\theta_i} f^L(x_j, x_k)$. Thus, we can write

$$\mathbb{E}\big[f^L(x) - f^L(x|_{\bar{A}})\big] = \frac{1}{m} \sum_{i=1}^m \mathbb{E}\big[f^L(x) - f^L(x|_{\bar{S}^\theta_i})\big] = \frac{1}{m} \sum_{i=1}^m \sum_{j,k} \mathbb{P}\{j \in S^\theta_i \text{ or } k \in S^\theta_i\}\, f^L(x_j, x_k)$$
$$= \frac{1}{m} \sum_{i=1}^m \sum_{j,k} \mathbb{P}\{\theta \le \max\{x_{ji}, x_{ki}\}\}\, f^L(x_j, x_k) = \frac{1}{m} \sum_{j,k} \Big(\sum_{i=1}^m \max\{x_{ji}, x_{ki}\}\Big) f^L(x_j, x_k) \le \frac{2}{m} \sum_{j,k} f^L(x_j, x_k) = \frac{2}{m} f^L(x), \qquad (9)$$

where the second equality in (9) holds because a pair of rows $x_j, x_k$ contributes exactly $f^L(x_j, x_k)$ to $f^L(x) - f^L(x|_{\bar{S}^\theta_i})$ if at least one of $j$ or $k$ belongs to $S^\theta_i$, and contributes $0$ otherwise. Moreover, the last inequality holds because $\sum_{i=1}^m \max\{x_{ji}, x_{ki}\} \le \sum_{i=1}^m (x_{ji} + x_{ki}) = 2$. Combining (9) and (8), we obtain (6).

V. MAXIMIZING SOCIAL WELFARE WITH MONOTONE CONVEX EXTERNALITIES
Here, we extend our results by considering the social welfare problem with general monotone convex externality functions.
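Before stating the formal result, the supermodularity claim of Lemma 3 below can be sanity-checked numerically on tiny instances; the following brute-force sketch (illustrative only, exponential in the number of agents, with hypothetical helper names) verifies the defining inequality:

```python
from itertools import combinations

def f_i(S, a, f):
    """Per-item objective f_i(S) = sum_{j in S} f(sum_{k in S} a[j][k])."""
    return sum(f(sum(a[j][k] for k in S)) for j in S)

def is_supermodular(g, ground, tol=1e-9):
    """Brute-force check of g(A|B) + g(A&B) >= g(A) + g(B) over all pairs."""
    subsets = [frozenset(c) for r in range(len(ground) + 1)
               for c in combinations(ground, r)]
    return all(g(A | B) + g(A & B) >= g(A) + g(B) - tol
               for A in subsets for B in subsets)

# Small instance: 3 agents, unit weights off the diagonal, convex f(y) = y^2.
a = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]
ok = is_supermodular(lambda S: f_i(S, a, lambda y: y * y), range(3))
```

Replacing the convex $y \mapsto y^2$ with a concave function such as $y \mapsto \min\{y, 1\}$ makes the check fail on this instance, which matches the role convexity plays in the proof of Lemma 3.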
Lemma 3:
Given nondecreasing convex functions $f_{ij} : \mathbb{R}_+ \to \mathbb{R}_+$ with $f_{ij}(0) = 0$, the objective function in (1) is a nondecreasing and nonnegative supermodular set function.

Proof:
Let $f_i(S_i) := \sum_{j \in S_i} f_{ij}\big(\sum_{k \in S_i} a_{ijk}\big)$, and note that the objective function in (1) can be written in the separable form $\sum_{i=1}^m f_i(S_i)$. Thus, it is enough to show that each $f_i(\cdot)$ is a monotone supermodular set function. The monotonicity and nonnegativity of $f_i$ follow immediately from the nonnegativity of the weights $a_{ijk}$ and the monotonicity and nonnegativity of $f_{ij}$, $\forall j \in [n]$. To show supermodularity of $f_i$, for any $A \subseteq B$ and $\ell \notin B$, we can write

$$f_i(B \cup \{\ell\}) - f_i(B) = \sum_{j \in B \cup \{\ell\}} f_{ij}\Big(\sum_{k \in B \cup \{\ell\}} a_{ijk}\Big) - \sum_{j \in B} f_{ij}\Big(\sum_{k \in B} a_{ijk}\Big)$$
$$= \sum_{j \in B} \Big( f_{ij}\Big(\sum_{k \in B \cup \{\ell\}} a_{ijk}\Big) - f_{ij}\Big(\sum_{k \in B} a_{ijk}\Big) \Big) + f_{i\ell}\Big(\sum_{k \in B \cup \{\ell\}} a_{i\ell k}\Big)$$
$$\ge \sum_{j \in A} \Big( f_{ij}\Big(\sum_{k \in B \cup \{\ell\}} a_{ijk}\Big) - f_{ij}\Big(\sum_{k \in B} a_{ijk}\Big) \Big) + f_{i\ell}\Big(\sum_{k \in A \cup \{\ell\}} a_{i\ell k}\Big)$$
$$\ge \sum_{j \in A} \Big( f_{ij}\Big(\sum_{k \in A \cup \{\ell\}} a_{ijk}\Big) - f_{ij}\Big(\sum_{k \in A} a_{ijk}\Big) \Big) + f_{i\ell}\Big(\sum_{k \in A \cup \{\ell\}} a_{i\ell k}\Big) = f_i(A \cup \{\ell\}) - f_i(A). \qquad (10)$$

Here, the first inequality holds by the monotonicity of the functions $f_{ij}$ and by $A \subseteq B$ (note that each of the summands is nonnegative). The second inequality in (10) follows from convexity of the functions $f_{ij}$. More precisely, given any $j \in A$, let $\sum_{k \in B \setminus A} a_{ijk} = d$, $\sum_{k \in A \cup \{\ell\}} a_{ijk} = p$, and $\sum_{k \in A} a_{ijk} = q$, where we note that $p \ge q$. By convexity of $f_{ij}$ we have $f_{ij}(p + d) - f_{ij}(p) \ge f_{ij}(q + d) - f_{ij}(q)$, or equivalently $f_{ij}(p + d) - f_{ij}(q + d) \ge f_{ij}(p) - f_{ij}(q)$, where the latter is exactly the second inequality in (10).

A. Convex Polynomial Externalities of Bounded Degree
Here, we show that if the externality functions $f_{ij}$ can be represented (or uniformly approximated) by polynomials with nonnegative coefficients of degree at most $d$, then Algorithm 1 is a $d$-approximation algorithm for the maximum social welfare problem.

Theorem 2:
Let each $f_{ij}$ be a polynomial with nonnegative coefficients of degree at most $d$. Then Algorithm 1 is a $d$-approximation algorithm for the welfare maximization problem (1).

Proof:
As each $f_{ij}$ is a convex and nondecreasing function, by Lemma 3 the objective function in (1) is a monotone supermodular function. Therefore, the Lovász extension relaxation of the objective function in (1), given by $f^L(x) = \sum_i f^L_i(x_i)$, is a concave function, where

$$f^L_i(x_i) = \mathbb{E}_\theta[f_i(x^\theta_i)] = \mathbb{E}_\theta\Big[\sum_j f_{ij}\Big(\sum_k a_{ijk} x^\theta_{ki} x^\theta_{ji}\Big)\Big]. \qquad (11)$$

As a result, one can solve the Lovász extension relaxation of the maximum social welfare problem using the corresponding concave program (5), whose optimal solution, again denoted by $x$, can be found in polynomial time. Since each $x^\theta_{ji}$ in (11) is a binary random variable, we have $(x^\theta_{ji})^t = x^\theta_{ji}$ for all $t \ge 1$. As each $f_{ij}$ is a polynomial of degree at most $d$ with nonnegative coefficients, after expanding all the terms in (11) there are nonnegative coefficients $b_{ij_1 \ldots j_r}$ such that

$$\sum_j f_{ij}\Big(\sum_k a_{ijk} x^\theta_{ki} x^\theta_{ji}\Big) = \sum_{r=2}^d \sum_{j_1, \ldots, j_r} b_{ij_1 \ldots j_r}\, x^\theta_{j_1 i} \cdots x^\theta_{j_r i}.$$

Note that we may assume $f_{ij}$ does not have any constant term, as it does not affect the optimization. Taking expectations in the above relation and summing over all $i$, we get

$$f^L(x) = \sum_{r=2}^d \sum_{j_1, \ldots, j_r} f^L(x_{j_1}, \ldots, x_{j_r}), \qquad (12)$$

where $f^L(x_{j_1}, \ldots, x_{j_r}) := \sum_i b_{ij_1 \ldots j_r} \min\{x_{j_1 i}, \ldots, x_{j_r i}\}$ for $2 \le r \le d$. Now, using the same argument as in Theorem 1, let $A = S^\theta_I$ be the random set obtained during the first round of Algorithm 1. The term $f^L(x_{j_1}, \ldots, x_{j_r})$ contributes to the expected value $\mathbb{E}[f^L(x) - f^L(x|_{\bar{A}})]$ if at least one of the agents $j_1, \ldots, j_r$ belongs to $A$. Therefore, using (12) and linearity of expectation, we can write

$$\mathbb{E}\big[f^L(x) - f^L(x|_{\bar{A}})\big] = \frac{1}{m} \sum_{i=1}^m \mathbb{E}\big[f^L(x) - f^L(x|_{\bar{S}^\theta_i})\big] = \frac{1}{m} \sum_{i=1}^m \sum_{r=2}^d \sum_{j_1, \ldots, j_r} \mathbb{P}\big\{\cup_{\ell=1}^r \{j_\ell \in S^\theta_i\}\big\}\, f^L(x_{j_1}, \ldots, x_{j_r})$$
$$= \frac{1}{m} \sum_{i=1}^m \sum_{r=2}^d \sum_{j_1, \ldots, j_r} \max\{x_{j_1 i}, \ldots, x_{j_r i}\}\, f^L(x_{j_1}, \ldots, x_{j_r}) = \frac{1}{m} \sum_{r=2}^d \sum_{j_1, \ldots, j_r} \Big(\sum_{i=1}^m \max\{x_{j_1 i}, \ldots, x_{j_r i}\}\Big) f^L(x_{j_1}, \ldots, x_{j_r})$$
$$\le \frac{1}{m} \sum_{r=2}^d \sum_{j_1, \ldots, j_r} r\, f^L(x_{j_1}, \ldots, x_{j_r}) \le \frac{d}{m} \sum_{r=2}^d \sum_{j_1, \ldots, j_r} f^L(x_{j_1}, \ldots, x_{j_r}) = \frac{d}{m} f^L(x) = \frac{d}{m} \sum_{i=1}^m \mathbb{E}_\theta[f_i(S^\theta_i)] = d \cdot \mathbb{E}[f_I(A)].$$

In the above derivation, the first inequality holds because $\sum_{i=1}^m \max\{x_{j_1 i}, \ldots, x_{j_r i}\} \le \sum_{i=1}^m \sum_{\ell=1}^r x_{j_\ell i} = r$, and the second inequality holds because all the terms $f^L(x_{j_1}, \ldots, x_{j_r})$ are nonnegative. This shows that after the first round of Algorithm 1 we have $\mathbb{E}[f_I(A)] \ge \frac{1}{d}\, \mathbb{E}[f^L(x) - f^L(x|_{\bar{A}})]$. Now, by following the same argument as in the first part of the proof of Theorem 1, one can see that the expected value of Algorithm 1 is at least $\mathbb{E}[\mathrm{Alg}] \ge \frac{1}{d} f^L(x)$.

Remark 1:
For convex polynomial externalities of degree at most $d$, the $d$-approximation guarantee of Theorem 2 is an exponential improvement over the $O(2^d)$-approximation guarantee given in [13, Theorem 3.9].

B. Monotone Convex Externalities of Bounded Curvature
In this section, we provide an approximation algorithm for the maximum social welfare problem with general monotone convex externalities. Unfortunately, for general convex externalities, the Lovász extension of the objective function does not admit a closed-form structure. For that reason, we develop an approximation algorithm whose performance guarantee depends on the curvature of the externality functions.
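For intuition, the curvature quantity defined next (Definition 2) can be approximated numerically on a grid; for a monomial $h(z) = z^d$, the ratio $h(\alpha z)/h(z) = \alpha^d$ is constant in $z$, so $\Gamma^h_\alpha = \alpha^d$ exactly. A minimal sketch (illustrative only):

```python
def alpha_curvature(h, alpha, z_max=50.0, steps=5000):
    """Grid approximation of Gamma^h_alpha = inf_{z>0} h(alpha*z)/h(z)
    for a nonnegative nondecreasing h with h(0) = 0 (z = 0 excluded)."""
    ratios = (h(alpha * z) / h(z)
              for z in (z_max * t / steps for t in range(1, steps + 1)))
    return min(ratios)

# For h(z) = z^3 and alpha = 1/4, the curvature is (1/4)^3 = 1/64.
gamma = alpha_curvature(lambda z: z ** 3, 0.25)
```

A grid minimum is only a heuristic stand-in for the infimum, but for the polynomial externalities of interest here the ratio is monotone or constant in $z$, so the approximation is tight.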
Definition 2:
Given $\alpha \in (0,1)$, we define the $\alpha$-curvature of a nonnegative nondecreasing convex function $h : \mathbb{R}_+ \to \mathbb{R}_+$ as $\Gamma^h_\alpha := \inf_{z > 0} \frac{h(\alpha z)}{h(z)}$, where we note that $\Gamma^h_\alpha \in [0,1]$.

Theorem 3:
The maximum social welfare problem (1) with nondecreasing convex externality functions $f_{ij}$ admits a $\Gamma^{-1}$-approximation algorithm, where $\Gamma := \min_{i,j} \Gamma^{f_{ij}}_{1/4}$.

Proof:
Using Lemma 3 the social welfare maximization(1) with monotone convex externality functions can be castas the supermodular maximization problem (4). Relaxingthat problem via Lov´asz extension, we obtain the concaveprogram (5), whose optimal solution, denoted by x , can befound in polynomial time. We round the optimal fractionalsolution x to an integral one ˆ x using the two-stage faircontention resolution scheme. It is instructive to think aboutthe rounding process as a two-stage process for rounding thefractional n × m matrix x . In the first stage, the columns arerounded independently, and in the second stage, the rows ofthe resulting solution are randomly revised to create the finalintegral solution ˆ x , satisfying the partition constraints in (4)with probability 1. We note that although ˆ x is correlatedacross its columns (due to the row-wise rounding), becausethe objective function in (4) is separable across the columnsvariables, using linearity of expectation and regardless of therounding scheme we have E [ P i f i (ˆ x i )] = P i E [ f i (ˆ x i )] .The first stage of the rounding simply follows from thedefinition of the Lov´asz extension: for each i ∈ [ m ] pickan independent uniform random variable θ i ∈ [0 , , and for j ∈ [ n ] let x θ i ji = 1 if x ji ≥ θ i , and x θ i ji = 0 , otherwise.Then, from the definition of the Lov´asz extension, we have P i E [ f i ( x θ i i )] = P i f Li ( x i ) . Thus, after the first stage weobtain a binary random matrix x θ = [ x θ | . . . | x θ m m ] whoseexpected objective value equals to the maximum value of theconcave relaxation (5). Unfortunately, after the first phase,the rounded solution may not satisfy the partition constraintsas multiple items may be assigned to the same agent j .To resolve that issue, in the second stage we round x θ to ˆ x by separately applying the fair contention resolutionto every row of x θ . 
More precisely, for each j ∈ [n], let A_j := {i : x^{θ_i}_{ji} = 1} be the random set of positions in the j-th row of x^θ that are rounded to 1 in the first stage. If |A_j| ≤ 1, we do nothing. Otherwise, we set all the entries of the j-th row to 0 except the i-th entry, where i is selected from A_j with probability

r_{i,A_j} = Σ_{i′∈A_j∖{i}} x_{ji′}/(|A_j| − 1) + Σ_{i′∉A_j} x_{ji′}/|A_j|.

Thus, after the second stage of rounding, x̂ is a feasible solution to (4). Since θ_i, ∀i ∈ [m], are independent uniform random variables, for any j ∈ [n] we have P{x^{θ_i}_{ji} = 1} = P{θ_i ≤ x_{ji}} = x_{ji}. Therefore, for any row j, one can imagine that the m entries of row j compete independently with probabilities {x_{ji}}_{i∈[m]} to receive the resource in the contention resolution scheme. Let us consider an arbitrary column i. Using Lemma 2 for row j, and given i ∈ A_j, the probability that item i is given to agent j is at least 1 − 1/e; that is, P{x̂_{ji} = 1 | x^{θ_i}_{ji} = 1} ≥ 1 − 1/e, ∀j ∈ [n]. Now consider any k ≠ j and note that P{x̂_{ji} = 1 | x^{θ_i}_{ji} = 1, x^{θ_i}_{ki} = 1} = P{x̂_{ji} = 1 | x^{θ_i}_{ji} = 1}, as the contention resolution in row j is independent of the event {x^{θ_i}_{ki} = 1}. Using the union bound, we can write

P{x̂_{ji} = 1, x̂_{ki} = 1 | x^{θ_i}_{ji} = 1, x^{θ_i}_{ki} = 1}
= 1 − P{x̂_{ji} = 0 ∪ x̂_{ki} = 0 | x^{θ_i}_{ji} = 1, x^{θ_i}_{ki} = 1}
≥ 1 − P{x̂_{ji} = 0 | x^{θ_i}_{ji} = 1, x^{θ_i}_{ki} = 1} − P{x̂_{ki} = 0 | x^{θ_i}_{ji} = 1, x^{θ_i}_{ki} = 1}
= P{x̂_{ji} = 1 | x^{θ_i}_{ji} = 1, x^{θ_i}_{ki} = 1} + P{x̂_{ki} = 1 | x^{θ_i}_{ji} = 1, x^{θ_i}_{ki} = 1} − 1
= P{x̂_{ji} = 1 | x^{θ_i}_{ji} = 1} + P{x̂_{ki} = 1 | x^{θ_i}_{ki} = 1} − 1
≥ 2(1 − 1/e) − 1 = 1 − 2/e > 1/4.
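The two-stage rounding just described can be sketched in code. This is a minimal illustration under our own naming, not the paper's implementation; the small fractional matrix in the test below is a hypothetical example, and the objective functions f_i are left out since the rounding does not depend on them.

```python
import random

def threshold_round(x, thetas):
    """Stage 1 (Lovasz/threshold rounding): column i uses its own uniform
    threshold theta_i; entry (j, i) is rounded to 1 iff x[j][i] >= theta_i."""
    n, m = len(x), len(x[0])
    return [[1 if x[j][i] >= thetas[i] else 0 for i in range(m)] for j in range(n)]

def fair_contention_probs(row_x, A):
    """Fair contention resolution weights for one row:
    r_{i,A} = sum_{i' in A, i' != i} x_{ji'} / (|A| - 1)
            + sum_{i' not in A} x_{ji'} / |A|."""
    m = len(row_x)
    return {i: sum(row_x[k] for k in A if k != i) / (len(A) - 1)
             + sum(row_x[k] for k in range(m) if k not in A) / len(A)
            for i in A}

def two_stage_round(x, rng):
    """Stage 1 followed by stage 2: in every row with more than one 1,
    keep a single 1 chosen according to the fair contention weights."""
    n, m = len(x), len(x[0])
    xhat = threshold_round(x, [rng.random() for _ in range(m)])
    for j in range(n):
        A = [i for i in range(m) if xhat[j][i] == 1]
        if len(A) <= 1:
            continue  # row already has at most one 1; keep it
        r = fair_contention_probs(x[j], A)
        winner = rng.choices(A, weights=[r[i] for i in A])[0]
        for i in range(m):
            xhat[j][i] = 1 if i == winner else 0
    return xhat
```

When a row of x sums to 1, the weights r_{i,A} sum to 1 over A, so they form a genuine probability distribution over the contending entries, and after the second stage every row contains at most one 1.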
Since column i was chosen arbitrarily, this means that for any i ∈ [m] and any two k ≠ j ∈ [n], we have

P{x̂_{ji} = 1, x̂_{ki} = 1} ≥ (1/4) P{x^{θ_i}_{ji} = 1, x^{θ_i}_{ki} = 1} = (1/4) min{x_{ji}, x_{ki}},

where in the first inequality we used the fact that the event {x̂_{ji} = 1, x̂_{ki} = 1} is a subset of the event {x^{θ_i}_{ji} = 1, x^{θ_i}_{ki} = 1}. By using Jensen's inequality, we can lower-bound the expected value of the rounded solution as

Σ_{i,j} E[f_ij(Σ_k a_ijk x̂_{ki} x̂_{ji})]
= Σ_{i,j} E_{θ_i}[ E[f_ij(Σ_k a_ijk x̂_{ki} x̂_{ji}) | x^{θ_i}_i] ]
≥ Σ_{i,j} E_{θ_i}[ f_ij( E[Σ_k a_ijk x̂_{ki} x̂_{ji} | x^{θ_i}_i] ) ]
= Σ_{i,j} E_{θ_i}[ f_ij( Σ_k a_ijk E[x̂_{ki} x̂_{ji} | x^{θ_i}_i] ) ],     (13)

where the inner expectation in the first equality is with respect to θ_{−i} = (θ_{i′}, i′ ≠ i) and the randomness introduced by the contention resolution in the second stage. Let 1{·} denote the indicator function. Then, for any i, j, k, we have

E[x̂_{ki} x̂_{ji} | x^{θ_i}_i] = E[x̂_{ki} x̂_{ji} | x^{θ_i}_{ji}, x^{θ_i}_{ki}]
= E[x̂_{ki} x̂_{ji} | x^{θ_i}_{ji} = 1, x^{θ_i}_{ki} = 1] · 1{x^{θ_i}_{ji} = 1, x^{θ_i}_{ki} = 1}
= P{x̂_{ki} = 1, x̂_{ji} = 1 | x^{θ_i}_{ji} = 1, x^{θ_i}_{ki} = 1} · 1{x^{θ_i}_{ji} = 1, x^{θ_i}_{ki} = 1}
≥ (1/4) · 1{x^{θ_i}_{ji} = 1, x^{θ_i}_{ki} = 1} = (1/4) x^{θ_i}_{ji} x^{θ_i}_{ki}.

Substituting the above relation into (13), and using the monotonicity of f_ij together with Definition 2, we can write

E[Σ_{i,j} f_ij(Σ_k a_ijk x̂_{ki} x̂_{ji})] ≥ Σ_{i,j} E_{θ_i}[f_ij((1/4) Σ_k a_ijk x^{θ_i}_{ji} x^{θ_i}_{ki})]
≥ Γ Σ_i E_{θ_i}[Σ_j f_ij(Σ_k a_ijk x^{θ_i}_{ji} x^{θ_i}_{ki})]
= Γ Σ_i E_{θ_i}[f_i(x^{θ_i}_i)] = Γ Σ_i f^L_i(x_i). ∎

VI. MAXIMIZING SOCIAL WELFARE WITH NONPOSITIVE LINEAR EXTERNALITIES
In this section, we consider maximizing the social welfare with nonpositive externalities and provide a constant-factor approximation algorithm by reducing the problem to submodular maximization subject to a matroid constraint. For the special case of pairwise nonpositive externalities, this reduction answers a question posed in [16, Section 7.11].

Let us again consider the social welfare maximization with linear externality functions, except that the influence weights between any distinct pair of agents are nonpositive, i.e., a_ijk ≤ 0, ∀i ∈ [m], ∀j ≠ k ∈ [n]. In other words, in the optimization problem (4) with f_i(x_i) = x′_i A_i x_i, we assume that the off-diagonal entries of all the matrices A_i, i ∈ [m], are nonpositive. However, to ensure that the maximization problem is well-defined from the perspective of approximation algorithms, we assume that the diagonal entries of A_i, i ∈ [m], are positive, so that for any feasible assignment of items to the agents the objective value in (4) is nonnegative. Otherwise, the maximization problem could have a negative optimal value, precluding the existence of a (multiplicative) approximation algorithm. This assumption is also meaningful from a practical point of view, as it implies that agents always derive positive utility from receiving an item; however, because of the negative externalities, an agent's utility decreases as more agents receive the same item.

Since for any i we have ∂²f_i(x_i)/∂x_ji ∂x_ki = a_ijk + a_ikj ≤ 0, ∀j ≠ k, the function f_i is submodular. However, we note that f_i may not be monotone due to the positive diagonal entries. Moreover, since by assumption agents derive positive (even if small) utility from receiving an item, f_i is a nonnegative function.
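The submodularity claim can be checked exhaustively on small instances: viewing a binary vector as the indicator of a set S, the quadratic set function f(S) = 1′_S A 1_S has decreasing marginal gains whenever the off-diagonal entries of A are nonpositive. Below is a minimal sketch; the 3 × 3 matrix is a hypothetical example and the helper names are ours, not from the paper.

```python
from itertools import combinations

def quad_value(A, S):
    """f(S) = 1_S' A 1_S, the quadratic set function induced by matrix A."""
    return sum(A[j][k] for j in S for k in S)

def is_submodular(A):
    """Exhaustively check decreasing marginals (small n only):
    f(S + e) - f(S) >= f(T + e) - f(T) for all S <= T and e not in T."""
    n = len(A)
    subsets = [set(c) for r in range(n + 1) for c in combinations(range(n), r)]
    for S in subsets:
        for T in subsets:
            if not S <= T:
                continue
            for e in range(n):
                if e in T:
                    continue
                gain_S = quad_value(A, S | {e}) - quad_value(A, S)
                gain_T = quad_value(A, T | {e}) - quad_value(A, T)
                if gain_S < gain_T - 1e-12:
                    return False
    return True

# Positive diagonal, nonpositive off-diagonal entries: submodular
# (but not necessarily monotone).
A_neg = [[3.0, -1.0, 0.0],
         [-1.0, 2.0, -0.5],
         [0.0, -0.5, 1.0]]
```

For a symmetric A, the marginal of adding e to S is a_ee + 2 Σ_{k∈S} a_ek, which can only shrink as S grows when the off-diagonal entries are nonpositive; the exhaustive check above confirms this on the example.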
The following theorem shows that for any nonnegative submodular functions f_i (and in particular for f_i(x_i) = x′_i A_i x_i as above), one can obtain a constant-factor approximation algorithm for the maximum social welfare problem by first solving its multilinear relaxation approximately and then rounding the fractional solution using a simple randomized rounding.

Theorem 4:
The social welfare problem (1) with nonpositive linear externalities admits a (1/e)-approximation algorithm.

Proof:
Let us consider the multilinear relaxation

max { Σ_{i=1}^m f^M_i(x_i) : Σ_{i=1}^m x_i ≤ 1, x_i ≥ 0, ∀i ∈ [m] },     (14)

where we note that f(x) = Σ_i f_i(x_i) is a nonnegative submodular function. The polytope P = {x : Σ_i x_i ≤ 1, x_i ≥ 0} is clearly down-monotone, as x ∈ P and 0 ≤ y ≤ x imply y ∈ P. Moreover, P is a solvable polytope, as it contains only m + n linear constraints and a total of mn variables. Therefore, using Lemma 1, one can find, in polynomial time, an approximate solution x* to (14) such that f^M(x*) ≥ (1/e) f(OPT), where OPT denotes the optimal integral solution to the original integer program (4).

Next, we can round the approximate solution x* to an integral one x̂ by rounding each row of x* independently using the natural probability distribution induced by that row. More precisely, for each row j (and independently of the other rows), we set the i-th entry of row j to 1 with probability x*_ji, and the remaining entries of row j to 0. Such a rounding sets at most one entry of each row of the rounded solution to 1 (as Σ_i x*_ji ≤ 1). Since the rounding is done independently across the rows, for any column i, the probability that the j-th entry is set to 1 is x*_ji, independently of the other entries in that column. Therefore, x̂_i represents the characteristic vector of a random set R(x*_i) ⊆ [n], where j ∈ R(x*_i) independently with probability x*_ji. As the objective function is separable across the columns, using linearity of expectation and the definition of the multilinear extension,

E[f(x̂)] = Σ_i E[f_i(x̂_i)] = Σ_i E[f_i(R(x*_i))] = Σ_i f^M_i(x*_i) = f^M(x*) ≥ (1/e) f(OPT). ∎

Remark 2:
As the set of constraints in P defines a partition matroid, one can obtain the same performance guarantee using the more general pipage rounding scheme [29, Lemma B.3]. However, given the simple structure of the polytope P, such a complex rounding is unnecessary, and one can save substantially in the running time of the algorithm. In fact, the performance guarantee in Theorem 4 holds in even more general settings, as long as the functions f_i are nonnegative and submodular. In particular, if we further assume that the influence weight matrices A_i, i ∈ [m], are diagonally dominant, i.e., Σ_k a_ijk ≥ 0, ∀i, j, then the submodular functions f_i(·) are also monotone. In that case, using the second part of Lemma 1, one can obtain an improved approximation factor of 1 − 1/e.

VII. CONCLUSIONS
We studied the problem of multi-item social welfare maximization subject to network effects. We first showed that the problem can be cast as a multi-agent sub-/supermodular optimization problem. We then used convex programming together with various randomized rounding techniques to devise improved approximation algorithms for that problem. In particular, we provided a simple, unifying method for devising approximation algorithms for the multi-item allocation problem using the rich literature on submodular optimization. Our principled approach not only recovers or improves some of the existing algorithms that were previously derived in an ad hoc fashion, but also has the potential to be used for devising efficient algorithms under additional constraints.

REFERENCES
[1] Z. Cao, X. Chen, X. Hu, and C. Wang, "Pricing in social networks with negative externalities," in International Conference on Computational Social Networks. Springer, 2015, pp. 14–25.
[2] N. Haghpanah, N. Immorlica, V. Mirrokni, and K. Munagala, "Optimal auctions with positive network externalities," ACM Transactions on Economics and Computation (TEAC), vol. 1, no. 2, pp. 1–24, 2013.
[3] J. Grossklags, N. Christin, and J. Chuang, "Secure or insure? A game-theoretic analysis of information security games," in Proc. 17th International Conference on World Wide Web, 2008, pp. 209–218.
[4] S. R. Etesami, "Complexity and approximability of optimal resource allocation and Nash equilibrium over networks," SIAM Journal on Optimization, vol. 30, no. 1, pp. 885–914, 2020.
[5] T. Roughgarden, Selfish Routing and the Price of Anarchy. MIT Press, 2005.
[6] L. Blumrosen and S. Dobzinski, "Welfare maximization in congestion games," IEEE Journal on Selected Areas in Communications, vol. 25, no. 6, pp. 1224–1236, 2007.
[7] R. W. Rosenthal, "A class of games possessing pure-strategy Nash equilibria," International Journal of Game Theory, vol. 2, no. 1, pp. 65–67, 1973.
[8] I. Milchtaich, "Congestion games with player-specific payoff functions," Games and Economic Behavior, vol. 13, no. 1, pp. 111–124, 1996.
[9] O. Candogan, K. Bimpikis, and A. Ozdaglar, "Optimal pricing in networks with externalities," Operations Research, vol. 60, no. 4, pp. 883–905, 2012.
[10] H. Akhlaghpour, M. Ghodsi, N. Haghpanah, V. S. Mirrokni, H. Mahini, and A. Nikzad, "Optimal iterative pricing over social networks," in International Workshop on Internet and Network Economics. Springer, 2010, pp. 415–423.
[11] S. Bhattacharya, J. Kulkarni, K. Munagala, and X. Xu, "On allocations with negative externalities," in International Workshop on Internet and Network Economics. Springer, 2011, pp. 25–36.
[12] B. de Keijzer and G. Schäfer, "Finding social optima in congestion games with positive externalities," in European Symposium on Algorithms. Springer, 2012, pp. 395–406.
[13] A. Bhalgat, S. Gollapudi, and K. Munagala, "Mechanisms and allocations with positive network externalities," in Proceedings of the 13th ACM Conference on Electronic Commerce, 2012, pp. 179–196.
[14] M. Feldman, D. Kempe, B. Lucier, and R. Paes Leme, "Pricing public goods for private sale," in Proceedings of the 14th ACM Conference on Electronic Commerce, 2013, pp. 417–434.
[15] D. Chakrabarty, A. Mehta, and V. Nagarajan, "Fairness and optimality in congestion games," in Proceedings of the 6th ACM Conference on Electronic Commerce, 2005, pp. 52–57.
[16] B. de Keijzer, "Externalities and cooperation in algorithmic game theory," Ph.D. Thesis, Vrije Universiteit Amsterdam, 2014.
[17] B. Lehmann, D. Lehmann, and N. Nisan, "Combinatorial auctions with decreasing marginal utilities," Games and Economic Behavior, vol. 55, no. 2, pp. 270–296, 2006.
[18] L. Blumrosen and N. Nisan, "Combinatorial auctions," Algorithmic Game Theory, vol. 267, p. 300, 2007.
[19] C. Chekuri, J. Vondrák, and R. Zenklusen, "Submodular function maximization via the multilinear relaxation and contention resolution schemes," SIAM Journal on Computing, vol. 43, no. 6, pp. 1831–1879, 2014.
[20] G. Calinescu, C. Chekuri, M. Pál, and J. Vondrák, "Maximizing a submodular set function subject to a matroid constraint," in International Conference on Integer Programming and Combinatorial Optimization. Springer, 2007, pp. 182–196.
[21] C. Chekuri and A. Ene, "Submodular cost allocation problem and applications," in International Colloquium on Automata, Languages, and Programming. Springer, 2011, pp. 354–366.
[22] A. Ene and J. Vondrák, "Hardness of submodular cost allocation: Lattice matching and a simplex coloring conjecture," in Approximation, Randomization, and Combinatorial Optimization. Schloss Dagstuhl-Leibniz-Zentrum fuer Informatik, 2014.
[23] R. Santiago and F. B. Shepherd, "Multi-agent submodular optimization," arXiv preprint arXiv:1803.03767, 2018.
[24] ——, "Multivariate submodular optimization," in International Conference on Machine Learning. PMLR, 2019, pp. 5599–5609.
[25] L. Lovász, "Submodular functions and convexity," in Mathematical Programming: The State of the Art. Springer, 1983, pp. 235–257.
[26] M. Feldman, J. Naor, and R. Schwartz, "A unified continuous greedy algorithm for submodular maximization," in 2011 IEEE 52nd Annual Symposium on Foundations of Computer Science. IEEE, 2011, pp. 570–579.
[27] U. Feige and J. Vondrák, "The submodular welfare problem with demand queries," Theory of Computing, vol. 6, no. 1, pp. 247–290, 2010.
[28] J. Kleinberg and E. Tardos, "Approximation algorithms for classification problems with pairwise relationships: Metric labeling and Markov random fields," Journal of the ACM (JACM), vol. 49, no. 5, pp. 616–639, 2002.
[29] J. Vondrák, "Symmetry and approximability of submodular maximization problems,"