An Optimal Distributionally Robust Auction∗

Alex Suzdaltsev†

August 27, 2020
Abstract
An indivisible object may be sold to one of n agents who know their valuations of the object. The seller would like to use a revenue-maximizing mechanism but her knowledge of the valuations' distribution is scarce: she knows only the means (which may be different) and an upper bound for valuations. Valuations may be correlated. Using a constructive approach based on duality, we prove that a mechanism that maximizes the worst-case expected revenue among all deterministic dominant-strategy incentive compatible, ex post individually rational mechanisms is such that the object should be awarded to the agent with the highest linear score provided it is nonnegative. Linear scores are bidder-specific linear functions of bids. The set of optimal mechanisms includes other mechanisms, but all those have to be close to the optimal linear score auction in a certain sense. When means are high, all optimal mechanisms share the linearity property. The second-price auction without a reserve is an optimal mechanism when the number of symmetric bidders is sufficiently high.

Keywords: Robust Mechanism Design, Worst-case objective, Auctions, Moments problems, Duality

∗ This paper is based on Chapter I of my PhD thesis at Stanford University. I would like to thank Dan Iancu, Gabriel Carroll, Andy Skrzypacz, Ilya Segal, Bob Wilson, Dmitry Arkhangelsky, Pavel Krivenko, Alex Bloedel and audience members at 2020 Conference on Mechanism and Institution Design for helpful comments and encouragement.

† Higher School of Economics, Saint Petersburg, Russia, [email protected]

1 Introduction
One of the most basic problems in mechanism design is allocation of an item among n buyers by a revenue-maximizing seller. The classical solutions due to Myerson (1981) and Cremer and McLean (1988) have been obtained under the assumption of expected revenue maximization, where the expectation is taken with respect to a fixed distribution of bidders' values. This distribution is assumed either to (1) be objectively known by the seller or (2) represent her subjective beliefs.

While this assumption has led to important insights and does constitute a reasonable approximation to reality in certain cases, it may be less plausible in other situations. For example, a new good may be for sale, or a seller might not be a subjective expected utility maximizer. Even if the good is not new, there may exist purely statistical problems with non-parametric estimation of density functions, especially multiple-dimensional ones. On the other hand, even if a seller acts as a subjective expected utility maximizer in day-to-day "intuitive" decision problems that do not require explicit numerical articulation of beliefs, implementing a classical optimal auction requires first writing down a mathematical formula expressing the beliefs, and this may be infeasible.

Motivated by the above observations, in this paper we treat the distribution as unknown and consider a problem of finding an optimal distributionally robust mechanism, i.e. a mechanism that provides the best worst-case expected revenue, where the worst case is over all joint distributions of values lying in a large class. (The values are private and known by the bidders.) The class of distributions consists of all distributions with a known vector of means and known bounds for support.
One may think of the known vector of means and bounds for support as the best sparse educated guess that the seller has about the joint distribution of values, incorporating her beliefs or knowledge about the possible asymmetry of the buyers.

We restrict attention to deterministic ex post mechanisms, with ex post meaning dominant-strategy incentive compatible and ex post individually rational, a term used by Segal (2003). The motivation for this property is standard: one would like, in line with the Wilson doctrine (Wilson, 1987), to make a mechanism robust to misspecification of bidders' beliefs. This is even more important in our setting where the seller is concerned about the misspecification of her own prior and thus does not have a natural guess for the bidders' common prior, if one exists. As for the restriction to deterministic mechanisms, Chung and Ely (2007) give a maxmin foundation to dominant-strategy incentive compatibility under the assumption of a known regular value distribution; we do not know whether an analogous foundation holds in our setting.

The main result of the paper says that the considered maxmin problem admits a simple solution that we call a linear score auction (LSA). Its defining feature is that the winner of the auction is the bidder whose linear score is the highest, provided it is nonnegative. Linear scores are bidder-specific linear functions of bids. Transfers are pinned down by incentive compatibility in the standard way. Note that this mechanism can be regarded as a linear version of the Myersonian optimal auction: indeed, that auction under asymmetric value distributions is effectively a (generically) nonlinear score auction in which the scores equal the bidders' ironed virtual values. The difference between the Myersonian optimal auction and the linear score auction is illustrated in figure 1. Also note that when all score functions are identical, the linear score auction reduces to the usual second-price auction with a reserve price.
Thus, the distributional robustness of this common auction format may be an additional rationale for using it in practice.

The proof of the result uses strong linear programming duality. To get a duality-free intuition for why the result is true, consider the following simple example. Suppose there are two bidders with valuations lying in [0, 1] and expected valuations of m_1 and m_2 satisfying m_1 + m_2 ≥ 1. Consider the simple second-price auction without reserve, which is one feasible linear score auction. Consider any distribution F satisfying the above constraints. Under any profile of values (v_1, v_2) ∈ supp(F) with v_1 > v_2, the first bidder wins, and the seller gets v_2. Note, however, that if Nature transfers probability mass from (v_1, v_2) to (1, v_2), the seller's revenue does not change, but E(v_1) increases. (Because we assume that the values are known by the bidders, there are no informational spillovers in the sequential mechanism, so one may model it as a one-shot game. Here, we use the term "linear" in the calculus sense of the word, i.e. the functions are, in fact, possibly affine. Our score auction also has nothing to do with scoring auctions commonly used in procurement (see, e.g., Che (1993)): in procurement scoring auctions, a score combines information about a bidder's costs and the quality of her good, while here a score combines information about a bidder's valuation and the prior information on that valuation's distribution.)

Thus, for any distribution F there exists a distribution F̂ satisfying E_F̂ max{v_1, v_2} = 1 and E_F̂ v_i ≥ m_i, such that the seller's revenue is the same. But then, after moving from F to F̂, Nature can redistribute mass within the set {v : max{v_1, v_2} = 1} so that the means are lowered back to m_i and the seller's revenue decreases. (This is possible exactly when m_1 + m_2 ≥ 1.) Hence Nature may restrict attention to distributions F satisfying E_F max{v_1, v_2} = 1. Call such distributions "potentially worst-case distributions". The above argument is valid for any feasible mechanism that always allocates the good. But then, because of the identity E_F(min{v_1, v_2}) + E_F(max{v_1, v_2}) ≡ m_1 + m_2, the expected revenue of the SPA under a potentially worst-case distribution is equal to E_F(min{v_1, v_2}) = m_1 + m_2 − 1. That is, when the seller uses the SPA (and not some mechanism with nonlinearities that always allocates the good), her expected revenue depends on a (potentially worst-case) distribution F only through the known means m_i. This is a clear sign of robustness. (The analysis in this example does not prove that the SPA is optimal: when means are different, it is not, even among the mechanisms that always allocate the good. It merely shows why mechanisms of a specific form, of which the SPA is an example, yield robustness and thus are good candidates for an optimal mechanism.)

The actual proof starts with an observation that the worst-case expected revenue is equal to the convex closure of a mechanism's transfer function, evaluated at the point m, where m is the vector of means. Using an analytic representation of convex closures afforded by a classic strong duality result, we then show that for any feasible mechanism there exists a linear score auction that does no worse. This construction is the heart of the paper. While one can ascertain the validity of the construction visually in the case of two bidders, the proof is trickier in higher dimensions. The key to the construction is that the vector of (generalized) "reserve prices" for the desired revenue-improving linear score auction comes from a fixed point of a certain piecewise-linear map derived from the original mechanism. The existence of such a fixed point is guaranteed by the Brouwer theorem. However, not any fixed point can yield the desired construction. Thus, analyzing the structure of the set of fixed points constitutes a major part of the analysis.

Solving a finite-dimensional optimization problem, we then characterize the set of optimal parameters for the LSA. Qualitatively, the optimal auction is similar to the Myersonian solution: the optimal auction discriminates against stronger bidders (those with larger means) to reduce their information rents and intensify competition among all bidders.
This is manifested in the fact that for a fixed bid b, the score of a stronger bidder is lower than the score of a weaker bidder. However, there are new features. First, there may be multiple optimal vectors of parameters. This is a common scenario in maxmin optimization problems. Second, when bidders are symmetric, the set of optimal vectors of reserve prices depends on n: it decreases in n in the strong set order and includes zero for all sufficiently high n. This contrasts with the Myersonian solution, where the reserve price does not depend on n with symmetric bidders. This may be interpreted as evidence that competition is a more powerful device to protect against low-revenue distributions than a reserve price, so the latter is no longer needed when the number of bidders is sufficiently large. Finally, when bidders are sufficiently asymmetric, a new phenomenon of weak exclusion arises: it may be weakly optimal to exclude a (low-mean) bidder from an auction completely. This may never happen in the classical model when value distributions have overlapping supports. This is because the weak bidder's value is expected to be so low that including the bidder does not change the worst-case expected revenue. Interestingly, weak exclusion may happen only if n is above a certain threshold.

We prove that some optimal mechanism takes the form of a linear score auction. However, as alluded to above, in general many mechanisms can share the same worst-case expected revenue. In section 6, we characterize the whole set of optimal mechanisms for the case of two bidders. We show that a mechanism is optimal if and only if it is "sufficiently close" to the optimal linear score auction in a certain sense. When means are sufficiently high, the "safe neighborhood" of the optimal LSA collapses in a way that any optimal mechanism must coincide with the LSA in an area where the values are relatively high.

1.1 Related literature

This paper contributes to the growing literature on robust mechanism design. The closest contributions to ours are Carrasco et al.
(2018a), Koçyiğit et al. (2020), Neeman (2003), Che (2019) and Suzdaltsev (2020). Carrasco et al. (2018a) study the problem of selling the good to a single agent by a seller who maximizes worst-case expected revenue while knowing the first N moments of the distribution. They characterize the optimal randomized mechanism, and also the optimal deterministic posted price for the case when a mean and upper bound is known, as in our paper. Their approach also effectively uses duality. Koçyiğit et al. (2020) study, among other settings, second-price auctions with reserve price when there are n symmetric bidders with a known lower bound for (the common) E(v_i). They characterize the optimal reserve price but do not study the question of whether the SPA is an optimal deterministic mechanism. They also show that randomized mechanisms yield strictly more revenue in this setting. Che (2019) finds an optimal randomized reserve price in a second-price auction when the set of joint value distributions is the same as in this paper. Neeman (2003) finds the optimal reserve price in a similar setting where the set of distributions is the same as in Koçyiğit et al. (2020), but the criterion is the worst-case ratio of expected revenue to expected full surplus, rather than expected revenue itself. Finally, Suzdaltsev (2020) proves that a maxmin reserve price in a second-price auction is equal to the seller's valuation when the values are iid and the mean, and an upper bound on either values or variance, is known.

Bose et al. (2006) show that when the seller has an arbitrary set of iid priors that includes the bidders' common prior, an auction that equalizes ex post revenue is optimal. However, to implement such an auction, the seller still has to know the bidders' common prior, as the paper assumes Bayesian, rather than dominant-strategy, implementation. The paper also considers the case of maxmin optimization on the part of bidders.
Azar and Micali (2013) find a maxmin-optimal posted price mechanism (among posted price mechanisms only) for the case where there is an unlimited supply of the good and the mean and variance of every bidder's marginal value distribution are known. Bergemann and Schlag (2011) and Bergemann and Schlag (2008) consider minimax-regret pricing under various specifications of incomplete demand knowledge. Carrasco et al. (2018b) study mechanism design where a single agent's utility is nonlinear in allocation and all distributions with given bounds for support and mean are possible. Auster (2018) considers optimal design for a buyer in a lemons problem where all distributions of the seller's costs are possible. He and Li (2020) seek an optimal reserve price in an SPA when a (common) marginal distribution of every bidder's value is known but the correlation structure is unknown. They find that the optimal deterministic reserve price goes to zero when the number of bidders grows without bound, which is in accord with our results. Carroll (2017) studies the problem of multidimensional screening where again marginals, but not the correlation structure, are known. Chen et al. (2019) study multidimensional screening under moment information.

Wolitzky (2016) studies optimal mechanisms for bilateral trade when agents are maxmin optimizers, with their uncertainty sets being similar to the one studied in the present paper (a buyer knows only the mean of a seller's valuation and bounds on its support, and vice versa). He also proposes a foundation for such uncertainty sets via agents' uncertainty over each other's information structures. Unfortunately, such a microfoundation is not directly applicable to the present paper's set-up because of our assumption that correlation between values can be arbitrary.

Another strand of literature has concentrated on establishing the performance bounds of "simple" mechanisms, relative to optimal ones. For example, Azar et al.
(2013) show, among other results, that when bidders' values are independent, distributions' hazard rates are monotone, and the vector of means m is known, running a second-price auction with the vector of individual reserves equal to m ensures no less than a 1/e fraction of the optimal Myersonian revenue. Note that this approach implicitly tries to minimize the worst-case relative regret rather than maximize the worst-case performance per se. Other important contributions in this vein include Hartline and Roughgarden (2009), Dhangwatnotai et al. (2015), Allouah and Besbes (2020), and Giannakopoulos et al. (2019). Note that the present paper shows that the distinction between "simple" and optimal may not exist when optimality is in the maxmin sense: a relatively simple mechanism (an LSA) is optimal.

A natural idea when the type distribution is unknown is to ask the agents about it (see, e.g., Brooks (2013)). We rule out such schemes as we assume that bidders may lack such information themselves. Another approach is to try to infer the distribution of a bidder's value from other bidders' reports and then compute the empirical virtual value functions (Segal, 2003). Such schemes can asymptotically achieve full-distributional-information revenue. They are feasible in the present paper but they turn out not to be (strictly) optimal: intuitively, to run them successfully, the seller has to have some prior knowledge of the values' correlation (best if values are conditionally iid), which she lacks by assumption. Segal (2003) also characterized an optimal ex post mechanism with correlated private values, which is a generalization of the Myersonian optimal mechanism.

A related problem in robust design is finding mechanisms robust to misspecification of agents' information structures, rather than the designer's prior. Brooks and Du (2019) identify an optimal mechanism in the common value setting, while Du (2018) identifies a simple mechanism that asymptotically extracts full surplus.
This strand of literature still assumes a shared common prior between the designer and agents and that bidders' information structures are common knowledge among them. The present paper, in contrast, dispenses with both of these assumptions but assumes a rather narrow set of information structures: the seller knows that every buyer's information structure is such that she knows her value. Thus, our approach may be seen as complementary to theirs, albeit only indirectly so, as our analysis does not apply to the case of interdependent values, which is the focus of the aforementioned papers.

Other kinds of robustness explored in the economics literature include robustness to technology or preferences, robustness to strategic behavior and robustness to interaction among agents; they are surveyed by Carroll (2018).

There also exists a large literature on distributionally robust non-mechanism-design optimization in operations research, with the case of "known moments" being popular. This literature has been apparently pioneered by an economist (Scarf, 1958). See Wiesemann et al. (2014), See and Sim (2010), Goh and Sim (2010), Delage and Ye (2010), Popescu (2007) and references therein.

Finally, the seller in our model may be seen as ambiguity-averse. Note that our model conforms to the Gilboa and Schmeidler (1989) model of maxmin expected utility when one takes the set of all value profiles as the set of states of the world, and mechanisms as acts, because the set of priors we consider is convex. This would fail had we assumed, for instance, that values are known to be iid but the exact distribution is unknown. (See also Loertscher and Marx (2020) for an asymptotically optimal prior-free clock auction. Segal (2003) characterizes the mechanism under a restrictive assumption on conditional virtual values; see, e.g., Papadimitriou and Pierrakos (2011) for a treatment of a more general case.)

1.2 Organization of the paper

This paper is organized as follows.
We introduce the model in section 2, then set the stage for the proof of the main result in section 3 and present the proof itself in section 4. We then find and present the parametric solutions and discuss comparative statics in section 5. Section 6 discusses the set of optimal mechanisms, and section 7 some extensions. Section 8 concludes. The main missing proofs are stated in the Appendix. Other missing proofs, along with the discussion of worst-case distributions and certain examples, are in the Online Appendix that one can find at the end of this file.
Notation.
Statements of the form "for all i" should be interpreted as "for all i ∈ {1, . . . , n}". Symbols without subscripts may refer to either scalars or vectors. This should not create confusion in most cases. v′ > v for vectors means that v′_i > v_i for all i, and similarly for v′ ≥ v. ı_k is a vector of ones of size k. Value vectors are column vectors. The dual variable λ is a row vector.

2 Model

One indivisible object may be sold to one of n ≥ 2 buyers who know their values v_i, which may be correlated. The seller knows that buyers know their values; however, she lacks detailed information about the joint distribution F. She only knows that (1) the support of F is contained in [0, v̄]^n for some v̄ > 0, and (2) ∫ v_i dF = m_i, i = 1, . . . , n. Denote by ∆(m, v̄) the set of all Borel distributions on [0, v̄]^n satisfying these conditions. The seller's valuation of the object is zero. Denote by V the set of possible vectors of values V ∈ [0, v̄]^n and by V_{−i} the set of vectors of values of all bidders except i (V_{−i} = [0, v̄]^{n−1}).

We restrict attention to dominant-strategy incentive compatible and ex post individually rational mechanisms. The definitions of these properties are standard. Further, here we restrict attention to deterministic mechanisms, i.e. mechanisms such that each bidder gets the good with probability 0 or 1 under any reported profile of valuations v. Finally, we apply the Revelation principle, and consider only direct mechanisms. (The Revelation principle is valid in our setting for implementation in dominant strategies.)

Denote a direct deterministic mechanism by M = {x_i(v), t_i(v)}, i = 1, . . . , n, where x_i(·) are measurable allocation functions [0, v̄]^n → {0, 1} satisfying Σ_{i=1}^n x_i(v) ≤ 1, and t_i(·) are measurable transfer functions [0, v̄]^n → R.

Denote the set of dominant-strategy incentive compatible and ex post individually rational mechanisms by M. Denote the seller's expected revenue given mechanism M and distribution F by

R(M, F) := ∫ Σ_{i=1}^n t_i^M(v) dF.
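As a toy illustration of this notation (a sketch with our own numbers, not taken from the paper): take n = 2, v̄ = 1, and the second-price auction without reserve. Under a discrete F ∈ ∆((0.7, 0.6), 1) supported on the set {max{v_1, v_2} = 1}, R(M, F) reduces to E_F min{v_1, v_2} = m_1 + m_2 − 1, echoing the example from the introduction.

```python
# Toy check of R(M, F) for the second-price auction without reserve (n = 2,
# v_bar = 1). Illustrative numbers, not from the paper: means m = (0.7, 0.6).
def spa_transfers(v):
    """Transfers of the SPA without reserve: the winner pays the losing bid."""
    winner = 0 if v[0] >= v[1] else 1      # ties broken toward bidder 1
    t = [0.0, 0.0]
    t[winner] = min(v)
    return t

# A discrete F in Delta(m, 1) supported on {max(v1, v2) = 1}:
F = [(0.5, (1.0, 0.2)), (0.5, (0.4, 1.0))]

means = [sum(q * v[i] for q, v in F) for i in range(2)]
revenue = sum(q * sum(spa_transfers(v)) for q, v in F)
print(means, revenue)   # means are (0.7, 0.6) and revenue is 0.3 = m1 + m2 - 1
```

The revenue depends on this worst-case-style F only through the means, as the introduction's argument suggests.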
Then, the problem that we consider in this paper (and the seller considers herself) is

sup_{M ∈ M} inf_{F ∈ ∆(m, v̄)} R(M, F). (1)

Denote the value of the inner problem in (1) by R(M). That is, R(M) is the revenue guarantee of a mechanism M. The main result is that problem (1) admits a solution belonging to a simple class of mechanisms described below.

Definition 1.
Define bidder i's linear score s_i(v_i) by s_i(v_i) := β_i v_i − α_i, where α_i ≥ 0, β_i > 0 are bidder-specific parameters. A linear score auction is a mechanism {x(v), t(v)} ∈ M such that for i = 1, . . . , n:

x_i(v) = 1 if s_i(v_i) > max{max_{j≠i} s_j(v_j), 0},
x_i(v) = 0 if s_i(v_i) < max{max_{j≠i} s_j(v_j), 0}, (2)

for some α_i ≥ 0, β_i > 0.

Note that a linear score auction with α_i = r, β_i = 1 corresponds to the usual second-price auction with the reserve r. Hence, a linear score auction may be regarded as a simple generalization of the second-price auction that accounts for possible asymmetries of bidders.

Definition 2.
A linear score auction is a corner-hitting linear score auction if and only if, for some r ∈ [0, v̄]^n,

β_i = 1/(v̄ − r_i), α_i = r_i/(v̄ − r_i) if r_i < v̄, and β_i = 1, α_i = r_i if r_i = v̄.

The qualifier "corner-hitting" comes from the fact that in a depiction of such an auction's allocation function, the boundary separating the areas of value space corresponding to different bidders getting the object goes into the corner. Parameters r may be thought of as generalized reserve prices (see figure 2).

Equivalently, a corner-hitting linear score auction is a linear score auction such that for every bidder i, her maximum possible score is either one (when r_i < v̄) or zero (when r_i = v̄).

[Figure 2: a symmetric auction with reserve r (left), and two asymmetric auctions favoring bidder 1 (center and right). Note that the rightmost auction is not a corner-hitting linear score auction, while the other two are.]

Theorem 1.
For every mechanism M ∈ M, there exists a corner-hitting linear score auction LSA such that R(LSA) ≥ R(M).

Proposition 1.
There exists a corner-hitting linear score auction that maximizes the revenue guarantee R(M) among all corner-hitting linear score auctions.

Corollary 1.
There exists a mechanism that solves (1) and is a corner-hitting linear score auction.
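To make the object of Theorem 1 concrete, here is a small sketch (our own code; the function names and tie-handling are illustrative assumptions, not part of the paper) of a linear score auction's outcome, together with the corner-hitting parameterization of Definition 2, under which the score of bidder i is (v_i − r_i)/(v̄ − r_i) whenever r_i < v̄.

```python
V_BAR = 1.0   # upper bound of the support (v_bar), normalized to 1

def corner_hitting_params(r, v_bar=V_BAR):
    """Parameters of Definition 2: s_i(v) = (v - r_i)/(v_bar - r_i) if r_i < v_bar."""
    alpha, beta = [], []
    for ri in r:
        if ri < v_bar:
            beta.append(1.0 / (v_bar - ri))
            alpha.append(ri / (v_bar - ri))
        else:                      # excluded bidder: maximum score is zero
            beta.append(1.0)
            alpha.append(ri)
    return alpha, beta

def lsa_outcome(v, alpha, beta):
    """Winner and payment of a linear score auction (ties ignored for simplicity).

    The payment is the threshold pinned down by incentive compatibility:
    the lowest bid with which the winner would still win.
    """
    scores = [b * x - a for x, a, b in zip(v, alpha, beta)]
    i = max(range(len(v)), key=lambda k: scores[k])
    if scores[i] <= 0:
        return None, 0.0
    rival = max([s for k, s in enumerate(scores) if k != i] + [0.0])
    return i, (rival + alpha[i]) / beta[i]

# With identical scores (alpha_i = r, beta_i = 1) the LSA is the SPA with reserve r:
w, pay = lsa_outcome([0.8, 0.5, 0.3], alpha=[0.4] * 3, beta=[1.0] * 3)
print(w, pay)   # bidder 0 wins and pays max(second bid, reserve) = 0.5

# Corner-hitting scores reach exactly 1 at v_i = v_bar whenever r_i < v_bar:
alpha, beta = corner_hitting_params([0.2, 0.5, 1.0])
print([b * V_BAR - a for a, b in zip(alpha, beta)])   # approximately [1, 1, 0]
```

The last line illustrates the equivalent description of corner-hitting auctions: the maximum possible score is one for included bidders and zero for the excluded one.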
Note that theorem 1 establishes the form of only one optimal mechanism; in general there exist other optimal mechanisms, and there may even be multiple optimal mechanisms that are linear score auctions (see section 6 below).

A well-known characterization of the set M (given in, e.g., Segal (2003)) is as follows.

Lemma 1.
A mechanism {x_i(v), t_i(v)} ∈ M if and only if for each bidder i there exist a function p_i : [0, v̄]^{n−1} → [0, v̄] and a function z_i : [0, v̄]^{n−1} → R_+ such that for every valuation profile v ∈ [0, v̄]^n,

x_i(v) = 1 if v_i > p_i(v_{−i}), and x_i(v) = 0 if v_i < p_i(v_{−i}); (3)

t_i(v) = p_i(v_{−i}) x_i(v) − z_i(v_{−i}). (4)

Lemma 1 follows directly from the standard characterization of dominant-strategy incentive-compatibility: monotonicity of the allocation function and the envelope formula that pins down transfers (up to a constant) as a function of allocation. Second, revenue maximization in (1) implies that at the optimum z_i(v_{−i}) ≡ 0. Ties occur on the set of v such that v_i = p_i(v_{−i}) for some i. As we allow for discrete distributions (and the worst-case distributions will turn out to be discrete, see section B), the determination of allocation on this measure-zero set (the "tie-breaking rule") a priori might matter for the expected revenue (although it won't in fact matter for the optimal mechanism).

Thus, finding an optimal mechanism in the set M boils down to (1) finding threshold functions p_i(v_{−i}) such that bidder i gets the object and pays p_i(v_{−i}) if his report is higher than p_i(v_{−i}); (2) determining the tie-breaking rule.

Note that for any corner-hitting linear score auction the functions p_i have a certain simple form:

p_i(v_{−i}) = r_i + (v̄ − r_i) max{0, max_{j ∈ I(r)\{i}} (v_j − r_j)/(v̄ − r_j)} (5)

for some r ∈ [0, v̄]^n, where the set I(r) is given by {i : r_i < v̄}.

In what follows, we will use a shorthand notation LSA(r), with some r_i possibly equal to v̄, to refer to a corner-hitting linear score auction in which i ∈ I if and only if r_i < v̄, and threshold functions are given by (5) for the parameter vector r.

First, we reformulate the inner problem in (1) (Nature's problem) using an appropriate linear programming duality result.
This allows for a more tractable expression reflecting the dependence of the worst-case expected revenue on the mechanism.

A revealing characterization of the worst-case revenue under a given mechanism M is as follows. Denote by conv[t] the convex closure of the function t, i.e. for all v ∈ [0, v̄]^n, conv[t](v) := inf{R | (v, R) ∈ conv(graph(t))}, where conv(graph(t)) is the convex hull of the graph of t(·).

Lemma 2.
For any mechanism M,

R(M) = conv[t^M](m). (6)

Proof:
See Appendix. □
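The geometry of Lemma 2 can be checked numerically in a one-bidder toy case (our own example, not from the paper): for a posted price t(v) = p·1{v ≥ p} on [0, v̄], the lower convex envelope of the graph of t at a mean m ∈ (p, v̄) equals p(m − p)/(v̄ − p), and a brute-force search over Nature's two-point distributions with mean m recovers approximately the same worst-case revenue.

```python
p, v_bar, m = 0.5, 1.0, 0.7   # posted price, support bound, mean (our toy numbers)

def t(v):
    """Transfer of a posted-price mechanism for a single bidder."""
    return p if v >= p else 0.0

# Search over two-point distributions on {a, b} with a < m < b and mean m;
# the weight on a is q = (b - m)/(b - a), so that q*a + (1 - q)*b = m.
grid = [k / 500 for k in range(501)]
worst = min(
    ((b - m) / (b - a)) * t(a) + ((m - a) / (b - a)) * t(b)
    for a in grid if a < m
    for b in grid if b > m
)
envelope = p * (m - p) / (v_bar - p)
print(round(worst, 3), envelope)   # both approximately 0.2
```

The small gap between the two numbers is only due to the grid: Nature's optimum puts mass just below p (where the transfer drops to zero) and at v̄, which is exactly the lower convex envelope evaluated at m.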
The intuition behind the representation (6) is exactly as in the Bayesian persuasion literature started by Kamenica and Gentzkow (2011): Nature can achieve expected revenue R with some feasible distribution if and only if (m, R) is in the convex hull of the graph of t^M, so the minimal possible revenue is achieved at the lower boundary of the convex hull, similar to how the highest sender's utility is achieved at the upper boundary of the convex hull of V(µ) at the point µ in Kamenica and Gentzkow (2011). Whereas in Kamenica and Gentzkow (2011) the constraint is that µ, the prior belief, has to be the mean of posterior beliefs, here the constraint is that m has to be the mean of values. Even though the representation (6) is intuitive, it is not yet particularly convenient for solving the mechanism design problem. Fortunately, by the Fenchel-Moreau-Rockafellar duality theorem (see Bonnans and Shapiro (2000), theorem 2.113), the convex closure of a function can be equivalently represented as the function's biconjugate, that is,

conv[t^M](m) = sup_{λ ∈ R^n} ( λm + inf_{v ∈ [0, v̄]^n} { t^M(v) − λv } ) (7)

for all m ∈ (0, v̄)^n.

Alternatively, one may have arrived at the expression (7) by considering the dual of Nature's linear problem directly, and invoking strong duality (Smith, 1995).

An important co-result useful in the proof of theorem 1 is that the supremum in (7) is achieved:

Lemma 3.
The supremum in (7) is achieved at some λ*(M) ∈ R^n.

Lemma 3 follows from the results in Smith (1995), as the transfer function is bounded from below. In the Online Appendix, we provide a direct proof of this lemma.

The λ*(M) are the optimal dual variables and the Lagrange multipliers on the mean constraints in Nature's problem. If unique, λ* may also be interpreted as the local slope of the convex closure of t^M(v) at m. Note that we allow for both positive and negative lambda; this stems from our specification of mean constraints as equalities. In the Online Appendix, we give examples of mechanisms for which λ*_i < 0 for some i. Note that for such mechanisms the seller's revenue counterintuitively decreases in m_i for such i. It is a priori not evident that such mechanisms are suboptimal; however, it will follow from the proof of theorem 1 that they indeed are. In section 7, we discuss an alternative setting with inequality constraints in which one may restrict λ to be nonnegative a priori.

Given a mechanism M, partition [0, v̄]^n into n + 1 sets W_0, W_1, . . . , W_n, where W_i, i = 1, . . . , n, are the sets of valuations such that bidder i gets the object, and W_0 is the set of valuations such that no one gets the object. Then, for a given λ, the expression under the supremum in (7) is equal to

λm + min{ min_{i=1,...,n} inf_{v ∈ W_i(p)} ( p_i(v_{−i}) − λv ), inf_{v ∈ W_0(p)} ( −λv ) } (8)

The sets W_i are determined, up to tie-breaking, by the functions p_i(·). The tie-breaking rule may be specified by disjoint sets O_i ⊂ {v : v_i = p_i(v_{−i})} such that W_i = {v : v_i > p_i(v_{−i})} ∪ O_i, i = 1, . . . , n. If the indifference is between granting the object to a bidder and not granting it at all, it is always harmless to grant the object to someone, so we let the sets O_i satisfy ∪_{i=1}^n O_i = ∪_{i=1}^n {v : v_i = p_i(v_{−i})} \ ∪_{i=1}^n {v : v_i > p_i(v_{−i})}. Note that this also implies that the set W_0 is defined with strict inequalities: W_0 = {v : v_i < p_i(v_{−i}) for all i}.
W_0 may be empty.

The following lemma shows that (1) one can safely ignore the sets O_i when optimizing over threshold functions p; (2) one can extend W_i to certain intersecting sets such that the infimum (8) does not change. This extension is important for the proof of theorem 1.

Lemma 4.
For any λ ∈ R^n and any mechanism M ∈ M, the value of (8) is the same as the value of

λm + min{ min_{i=1,...,n} inf_{v ∈ W_i^≥(p)} ( p_i(v_{−i}) − λv ), inf_{v ∈ W_0(p)} ( −λv ) } (9)

where W_i^≥(p) := {v : v_i ≥ p_i(v_{−i})}, i = 1, . . . , n.

Proof:
See Appendix. □
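As a numeric aside (our own one-bidder toy example with a posted price t(v) = p·1{v ≥ p}, not from the paper): the sup over λ in (7) can be brute-forced on a grid, and the supremum is attained, consistent with Lemma 3, here at λ* = p/(v̄ − p).

```python
p, v_bar, m = 0.5, 1.0, 0.7   # one-bidder posted price (our illustrative numbers)

def t(v):
    """Transfer of the posted-price mechanism."""
    return p if v >= p else 0.0

vs = [k / 1000 for k in range(1001)]         # grid on the value space [0, v_bar]
lams = [k / 100 for k in range(201)]         # grid on [0, 2] for the dual variable
values = [lam * m + min(t(v) - lam * v for v in vs) for lam in lams]
best = max(range(len(lams)), key=lambda k: values[k])
print(lams[best], round(values[best], 3))    # attained at lam* = p/(v_bar - p) = 1.0
```

The maximized value, about 0.2, matches the lower convex envelope p(m − p)/(v̄ − p) at m, as formula (7) requires.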
Lemma 4 is not trivial because the sets W_i^≥(p) are not merely closures of W_i. Whereas the projection of W_i on V_{−i} may be a strict subset of V_{−i}, the projection of W_i^≥(p) on V_{−i} is the whole V_{−i} = [0, v̄]^{n−1} (recall that p_i(v_{−i}) ≤ v̄).

As using strong duality converts the inner problem into a maximization problem, one may collapse the outer and the inner problem into a single maximization problem. Thus, the final reformulation of the problem (1) may be stated as:

Choose measurable functions p_i(v_{−i}) : [0, v̄]^{n−1} → [0, v̄] and λ ∈ R^n to maximize

R(p, λ) := λm + min{ min_{i=1,...,n} inf_{v ∈ W_i^≥(p)} ( p_i(v_{−i}) − λv ), inf_{v ∈ W_0(p)} ( −λv ) } (10)

subject to the supply constraint: for all i ≠ j and all v ∈ [0, v̄]^n, v_i > p_i(v_{−i}) implies v_j ≤ p_j(v_{−j}), where W_i^≥(p) := {v : v_i ≥ p_i(v_{−i})}, i = 1, . . . , n, and W_0 = {v : v_i < p_i(v_{−i}) for all i}.

In (10), we abuse notation slightly by introducing the functional R(p, λ) that plays a central role in the analysis to follow. Note that R(M) = sup_λ R(p, λ) where p are the threshold functions representing mechanism M. Also note that even though sup_λ R(p, λ) equals the worst-case expected revenue only for tuples of functions p satisfying the supply constraint, R(p, λ) is well-defined for all tuples of functions p_i(v_{−i}) : [0, v̄]^{n−1} → [0, v̄], i = 1, . . . , n. The proof of theorem 1 involves evaluating R(p, λ) at a tuple p representing an infeasible mechanism.

Suppose we are given a mechanism M ∈ M, represented by threshold functions p. Compute an optimal λ* ∈ R^n that maximizes R(p, λ) given p (such a lambda exists by Lemma 3). We will construct a linear score auction LSA with threshold functions p̂ such that either R(p̂, λ*) ≥ R(p, λ*) (in Grand case I below) or R(p̂, λ**) ≥ R(p, λ**) ≥ R(p, λ*) for some λ** (in Grand case II). When lambda is reoptimized for the
LSA, the value of R(p̂, λ) will be even (weakly) higher.

The construction of the dominating LSA depends on λ* and is qualitatively different depending on whether λ* has negative components or not.

Grand case I. λ*_i ≥ 0 for all i = 1, ..., n.

Note that for every v_{−i} ∈ [0, v]^{n−1}, the value profile (v, v_{−i}) ∈ W^≥_i(p). Given this fact and the fact that λ*_i ≥ 0, for every i

inf_{v ∈ W^≥_i(p)} (p_i(v_{−i}) − λ*v) = inf_{v_{−i} ∈ [0,v]^{n−1}} (p_i(v_{−i}) − λ*_{−i}v_{−i} − λ*_i v).

Thus, (10) may be simplified further to

R(p, λ*) = λ*m + min{ min_{i=1,...,n} inf_{v_{−i} ∈ [0,v]^{n−1}} (p_i(v_{−i}) − λ*_{−i}v_{−i} − λ*_i v), inf_{v ∈ W(p)} (−λ*v) }.   (11)

Step 1.
The main idea is to replace each threshold function p_i(·) with a simpler function in such a way that the worst-case revenue (11) does not decrease. To this end, given a function p_i(v_{−i}) and λ*, compute

b_i := inf_{w ∈ [0,v]^{n−1}} (p_i(w) − λ*_{−i}w)

and then for every v_{−i} ∈ [0, v]^{n−1} define

p̃_i(v_{−i}) := max{λ*_{−i}v_{−i} + b_i, 0}.   (12)

p̃_i(v_{−i}) may be thought of as the supporting hyperplane to the graph of p_i(·) with a given slope λ*_{−i} (with an appropriate truncation on the boundary of [0, v]^{n−1}). Such a hyperplane exists even if p_i(·) is not convex because it is not a supporting hyperplane at a given point, but rather a supporting hyperplane with a given slope.

We replace each function p_i(·) with p̃_i(·). Even though the tuple p̃ typically violates the supply constraint, R(p̃, λ*) can still be evaluated.

Proposition 2. R(p̃, λ*) ≥ R(p, λ*).

Proof:
It is sufficient to prove that each inner infimum in (11) is weakly greater under p̃ than under p.

First, consider inf_{v ∈ W(p)} (−λv). Note that one can take w = v_{−i} in the inner minimization in (12) and hence p_i(·) ≥ p̃_i(·). Thus, W(p̃) ⊆ W(p) and inf_{v ∈ W(p̃)} (−λv) ≥ inf_{v ∈ W(p)} (−λv).

Now we prove that for all i, inf_{v_{−i} ∈ [0,v]^{n−1}} (p_i(v_{−i}) − λ*_{−i}v_{−i}) = inf_{v_{−i} ∈ [0,v]^{n−1}} (p̃_i(v_{−i}) − λ*_{−i}v_{−i}). In this equation, LHS ≥ RHS because p_i(·) ≥ p̃_i(·). However, LHS ≤ RHS as well because by (12), for every v_{−i}, p̃_i(v_{−i}) − λ*_{−i}v_{−i} ≥ inf_{w ∈ [0,v]^{n−1}} (p_i(w) − λ*_{−i}w) = LHS. Hence, LHS = RHS. □

[Figure 3: Step 1. The transformation p → p̃ for n = 2. On the left display, arbitrary threshold functions p_1 (in red) and p_2 (in blue). On the right display, the resulting p̃-functions are solid and the initial p-functions are dashed.]

Step 2.
In this step, given p̃ we find an LSA that gives revenue no less than R(p̃, λ*) and hence, R(p, λ*).

To this end, we use a fixed point of the map v → p̃(v) (the map defined for each i by v_i = p̃_i(v_{−i})). Indeed, this is a continuous map from [0, v]^n to itself, and thus, by Brouwer's theorem, a fixed point exists. Denote the set of fixed points by V* ≠ ∅.

We proceed by considering three mutually exclusive and exhaustive cases concerning the structure of V*.

Case 1.
There exists an interior fixed point, but there are no fixed points with at least one coordinate equal to zero, i.e., V* ∩ (0, v)^n ≠ ∅ and v* > 0 for all v* ∈ V*.

Case 2.
There exists a fixed point with at least one coordinate equal to zero, i.e., ∃ v* ∈ V* : v*_i = 0 for some i ∈ {1, ..., n}.

Case 3.
There are no interior fixed points and no fixed points with at least one coordinate equal to zero, i.e., for all v* ∈ V*, v* > 0 and ∃ i : v*_i = v.

Case 1.
There exists an interior fixed point, but there are no fixed points with at least one coordinate equal to zero, i.e., V* ∩ (0, v)^n ≠ ∅ and v* > 0 for all v* ∈ V*.

Take any interior fixed point v*. It satisfies the equations v_i = p̃_i(v_{−i}) for all i. As v* is interior, v* > 0, and thus, by (12), it satisfies the system of linear equations

A(λ*)v = b,   (13)

where the matrix A(λ) of size n is given by

a_ij(λ) = 1 if i = j;  a_ij(λ) = −λ_j if i ≠ j,   (14)

while b_i := inf_{w ∈ [0,v]^{n−1}} (p_i(w) − λ*_{−i}w), i = 1, ..., n.

The next lemma shows that, given the supposition in Case 1, the matrix A(λ*) must be of full rank.

Lemma 5.
If matrix A ( λ ∗ ) is singular and there exists a positive fixed point v ∗ ∈ V ∗ ,then V ∗ also contains a fixed point with a zero coordinate. Proof:
Suppose matrix A is singular. Then it follows from basic linear algebra that the set of solutions to the system (13) includes the affine subspace {v* + a v̂ | a ∈ R}, where v̂ is some nonzero solution to the homogeneous system Av = 0. By subtracting rows of A it is easy to prove that, given λ*_i ≥ 0, if v̂ is a solution to Av = 0, all coordinates of v̂ must be of the same sign. Without loss of generality, take the coordinate-wise positive solution v̂ and consider a = −min_j(v*_j / v̂_j). Then ṽ, where ṽ_i = v*_i − min_j(v*_j / v̂_j) v̂_i, is a solution to (13) which is nonnegative in all coordinates and equal to zero in at least one coordinate. Also, it is less than or equal to v* in every coordinate, so ṽ ∈ [0, v]^n. Hence, ṽ is the desired element of V* that has a zero coordinate. □

Thus, given the supposition in Case 1, A(λ*) must be of full rank. Hence, V* ∩ (0, v)^n is a singleton.

Now take the unique point v* ∈ V* ∩ (0, v)^n and consider LSA(v*). Denote by p̂ the threshold functions corresponding to LSA(v*) (they are given by (5)). We prove that LSA(v*) gives revenue no smaller than the tuple of threshold functions p̃.

[Figure 4: Step 2. The transformation p̃ → p̂ (LSA(v*)) for n = 2. On the left display, the fixed point v* of the map v → p̃(v) is identified. On the right display, the auction LSA(v*) is shown.]

Proposition 3. R(p̂, λ*) ≥ R(p̃, λ*) in Case 1.

Proof:
Again, it is sufficient to prove that each inner infimum in (11) is weakly greater under p̂ than under p̃.

First, consider inf_{v_{−i} ∈ [0,v]^{n−1}} (p̃_i(v_{−i}) − λ*_{−i}v_{−i}). It is sufficient to show that p̂ ≥ p̃ at every point. To this end, note that by the definition of v* and p̃ (12), p̃_i(v_{−i}) may be rewritten as

p̃_i(v_{−i}) = max{v*_i + λ*_{−i}(v_{−i} − v*_{−i}), 0}.   (15)

Thus, we need to prove the inequality

max{v*_i + λ*_{−i}(v_{−i} − v*_{−i}), 0} ≤ v*_i + (v − v*_i) max{0, max_{j ≠ i} (v_j − v*_j)/(v − v*_j)},   (16)

where the right-hand side is the expression for p̂. Note that because v* ∈ (0, v)^n, no bidder is excluded from the auction LSA(v*).

We will prove a slightly more general lemma that will also be of help later.

Lemma 6. Suppose χ*_{−i} ∈ R^{n−1} and v* ∈ [0, v]^n are such that v*_i + χ*_{−i}(v·1_{n−1} − v*_{−i}) ≤ v for all i. Denote by I(v*) the set of indices i such that v*_i < v. Then, for all i and for all v_{−i} ∈ [0, v]^{n−1},

max{v*_i + χ*_{−i}(v_{−i} − v*_{−i}), 0} ≤ v*_i + (v − v*_i) max{0, max_{j ∈ I(v*)\{i}} (v_j − v*_j)/(v − v*_j)}.   (17)

Proof:
First, note that if the LHS of (17) is 0, the inequality holds.

Suppose it is not zero. Then note that

v*_i + χ*_{−i}(v_{−i} − v*_{−i}) ≤ v*_i + Σ_{j ∈ I(v*)\{i}} χ*_j(v_j − v*_j)   (18)

because χ*_j(v_j − v*_j) ≤ 0 for j ∉ I(v*).

Now consider an auxiliary linear optimization problem in the |I(v*)| − 1 variables χ_j ≥ 0:

maximize v*_i + Σ_{j ∈ I(v*)\{i}} χ_j(v_j − v*_j)   (19)
s.t. v*_i + Σ_{j ∈ I(v*)\{i}} χ_j(v − v*_j) ≤ v.   (20)

Whenever (v_j − v*_j) > 0 for some j ∈ I(v*)\{i}, the problem (19) is solved by putting all weight on the variable with the highest “bang for the buck”, i.e., χ_j = (v − v*_i)/(v − v*_j) for j ∈ argmax_{j ∈ I(v*)\{i}} (v_j − v*_j)/(v − v*_j) and χ_j = 0 for other j. If (v_j − v*_j) ≤ 0 for all j, one should set χ_j = 0 for all j. Hence, the maximum value of problem (19) is equal precisely to the RHS of (17). On the other hand, χ* is feasible for the problem (19), as (20) holds for χ* due to the lemma's supposition, and the value of the objective function at χ* is precisely the RHS of (18). □

Now note that the supposition of Lemma 6 holds for χ* = λ* and the unique v* ∈ V* due to the fact that p̃_i(v·1_{n−1}) ≤ v. Also, I(v*) = {1, 2, ..., n}. Thus, applying Lemma 6 to χ* = λ* and v* yields the desired inequality (16).

Second, consider inf_{v ∈ W(p)} (−λ*v). We will prove that inf_{v ∈ W(p̂)} (−λ*v) = inf_{v ∈ W(p̃)} (−λ*v). Because, as we proved above, p̂ ≥ p̃, we have W(p̃) ⊆ W(p̂) and hence, LHS ≤ RHS. Note that
LHS = −λ*v* by the construction of LSA(v*) and the nonnegativity of λ*. To prove that LHS ≥ RHS, we prove that v* is a limit point of the set W(p̃).

Consider again the system (13). Because v* is the unique solution to it, matrix A must be of full rank, and so A^{−1} exists. Consider the sequence v^k = A^{−1}(b − 1_n/k), where 1_n is a vector of all ones of dimension n. v^k may not lie in (0, v)^n for all k. However, v^k approaches v*. Hence, after some k it lies in (0, v)^n and, as Av^k < b coordinate-wise, in W(p̃). □

Case 2.
There exists a fixed point with at least one coordinate equal to zero, i.e., ∃ v* ∈ V* : v*_i = 0 for some i.

In this case, we will again prove that LSA(v*) (again, call the respective threshold functions p̂) provides a weakly higher revenue than p̃. The major novelty in this case is that because v*_i = 0, v* may not satisfy the system (13) (recall that v*_i = p̃_i(v*_{−i}) = max{λ*_{−i}v*_{−i} + b_i, 0}, and so we only have that v*_i ≥ λ*_{−i}v*_{−i} + b_i if v*_i = 0). Due to this, the representation (15) will not hold for p̃ and one is not able to apply Lemma 6 directly.

To deal with this issue, we introduce auxiliary functions

p^aux_i(v_{−i}) := max{v*_i + k_i λ*_{−i}(v_{−i} − v*_{−i}), 0},   (21)

where the k_i are individual-specific coefficients set in a way that the functions p^aux_i satisfy the conditions of Lemma 6, so p̂ ≥ p^aux, and simultaneously ensure that p^aux ≥ p̃.

That is, we prove the inequality p̂ ≥ p̃ not directly, as in Case 1, but, if necessary, via the chain p̂ ≥ p^aux ≥ p̃.

Proposition 4. R(p̂, λ*) ≥ R(p̃, λ*) in Case 2.

Proof:
See Appendix. □
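The fixed-point machinery of Cases 1 and 2 is easy to mirror numerically. Below is our sketch (function names are ours, not the paper's): build A(λ) from (14), solve the system (13) for the candidate interior fixed point, and verify the fixed-point property of the map (12).

```python
import numpy as np

def A_matrix(lam):
    """Matrix (14): a_ij = 1 if i = j, and -lam_j if i != j."""
    n = len(lam)
    return np.eye(n) - (np.ones((n, n)) - np.eye(n)) * lam[None, :]

def interior_fixed_point(lam, b, v_bar):
    """Solve (13), A(lam) v = b; return v* if it is interior, else None."""
    v_star = np.linalg.solve(A_matrix(lam), b)
    if np.all(v_star > 0.0) and np.all(v_star < v_bar):
        return v_star
    return None

def p_tilde(lam, b, v):
    """The map v -> ptilde(v) from (12):
    ptilde_i = max(sum_{j != i} lam_j v_j + b_i, 0)."""
    return np.maximum(lam @ v - lam * v + b, 0.0)
```

With, say, λ = (0.3, 0.4) and b = (0.2, 0.25) on [0, 1]^n, the solution of (13) is interior and is indeed a fixed point of the map (12).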
Case 3.
There are no interior fixed points and no fixed points with at least one coordinate equal to zero, i.e., for all v* ∈ V*, ∃ i : v*_i = v, and v* > 0. The nontrivial part is the comparison of inf_{v ∈ W(p)} (−λ*v) for p̂ and p̃. On the one hand, W(p̂) ≠ ∅ as v* > 0, so the comparison via (38) cannot be made. On the other hand, as v* is not interior, a sequence approaching v* and lying in W(p̃) might not exist, so the argument made in Case 1 may not be valid.

Whether or not v* is a limit point of the set W(p̃) now depends on the geometry of the p̃_i functions in a neighborhood of the point, which is governed by the properties of the matrix A(λ*). In two dimensions, this is illustrated in figure 5.

[Figure 5: Two types of geometry of threshold functions around the fixed point (v, v). In both pictures, p̃_1 is in red and p̃_2 is in blue. In the left picture, det(A) > 0; in the right picture, det(A) < 0. Note that in the right picture there exists another fixed point w* with a zero coordinate.]

It turns out that in general the type of geometry is determined by the sign of det(A). Fortunately, det(A) can be computed explicitly. The formula, along with other properties of A to be used, is given in the following lemma.

Lemma 7.
For a matrix A(λ), given by (14), with λ ≥ 0, the following statements hold:

1. det(A) = (1 − Σ_{i=1}^n λ_i/(1 + λ_i)) · Π_{i=1}^n (1 + λ_i).
2. If det(A) > 0, all elements of A^{−1} are nonnegative.
3. If v* is the unique solution to Av = b, then λv* = (Σ_{i=1}^n [λ_i/(1 + λ_i)] b_i) / (1 − Σ_{i=1}^n λ_i/(1 + λ_i)).
4. The rank of A is at least n − 1.

Now consider two subcases separately.
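Before doing so, the closed forms in Lemma 7 can be sanity-checked numerically. The sketch below is our own (names ours) and assumes the formulas as stated above.

```python
import numpy as np

def A_matrix(lam):
    """Matrix (14): ones on the diagonal, -lam_j elsewhere in column j."""
    n = len(lam)
    return np.eye(n) - (np.ones((n, n)) - np.eye(n)) * lam[None, :]

def det_closed_form(lam):
    """Item 1 of Lemma 7: det A = (1 - sum_i lam_i/(1+lam_i)) * prod_i (1+lam_i)."""
    return (1.0 - np.sum(lam / (1.0 + lam))) * np.prod(1.0 + lam)

def lam_dot_vstar(lam, b):
    """Item 3 of Lemma 7: lam*v* for the unique solution of A v = b."""
    w = lam / (1.0 + lam)
    return (w @ b) / (1.0 - np.sum(w))
```

On random nonnegative λ with Σ λ_i/(1+λ_i) < 1, the closed-form determinant matches `numpy.linalg.det`, the inverse is entrywise nonnegative (item 2), and item 3 agrees with directly solving the system.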
Subcase 1. Σ_{i=1}^n λ*_i/(1 + λ*_i) < 1. In this subcase, det(A) > 0 and the functions p̃_i exhibit the same type of geometry as in figure 5, left.

In this case, we show that, as usual, LSA(v*) dominates p̃ (one can take any v* ∈ V*). For bidders with v*_i < v, the proof that p̂_i ≥ p̃_i is entirely as in Cases 1 and 2 (by Lemma 6), since v* > 0. If v*_i = v, then p̂_i ≡ v ≥ p̃_i.

Now consider inf_{v ∈ W(p)} (−λ*v). As in Case 1, inf_{v ∈ W(p̂)} (−λ*v) = −λ*v* by the definition of the threshold functions for the linear score auction. We show that v* is a limit point of the set W(p̃), so inf_{v ∈ W(p̂)} (−λ*v) ≥ inf_{v ∈ W(p̃)} (−λ*v).

Note that because v* > 0, v* satisfies the system (13), A(λ*)v = b. Consider the sequence v^k = A^{−1}(b − 1_n/k) (A^{−1} exists as det(A) > 0). As Av^k < b coordinate-wise and v^k → v*, the sequence lies in W(p̃) for k large enough, as in Case 1.

Subcase 2. Σ_{i=1}^n λ*_i/(1 + λ*_i) ≥ 1. In this subcase, det(A) ≤ 0 and the functions p̃_i exhibit the same type of geometry as in figure 5, right.

We will show that in this subcase there must, along with v*, exist a fixed point w* ∈ V* such that at least one coordinate of w* is equal to zero. Hence, this subcase is incompatible with the premise of Case 3. In two dimensions, the existence of the fixed point w* is illustrated in figure 5, right.

Suppose that Σ_{i=1}^n λ*_i/(1 + λ*_i) = 1. Then det(A) = 0 and thus the desired fixed point w* exists by Lemma 5.

From now on, suppose that Σ_{i=1}^n λ*_i/(1 + λ*_i) > 1. To construct w*, one can, in a certain sense, ignore all bidders with v*_i < v (where v* is the initial fixed point considered in Case 3). Indeed, by the premise of Case 3 there exists a bidder j such that v*_j = v. Thus,

v = p̃_j(v*_{−j}) = λ*_{−j}v*_{−j} + b_j ≤ λ*_{−j}(v·1_{n−1}) + b_j ≤ v,   (22)

where the first inequality is due to λ* ≥ 0 and the second is due to p̃_j(v·1_{n−1}) ≤ v. Hence, all inequalities hold as equalities, and thus λ*_{−j}(v·1_{n−1} − v*_{−j}) = 0. But every term in this sum is nonnegative, so we must have λ*_i(v − v*_i) = 0 for all i ≠ j. Thus, for all i with v*_i < v we must have λ*_i = 0. But this means that the values of all such bidders do not impact p̃_j for j such that v*_j = v. (Call this set of bidders J_max again.) Hence, it is sufficient to find a desired fixed point restricted to the set of bidders J_max. To see this formally, denote by [0, w̃] a vector consisting of zeros at coordinates j ∉ J_max and of w̃_j for j ∈ J_max. Then, if a point w̃* ∈ [0, v]^{|J_max|} satisfies the system w̃_i = p̃_i([0, w̃]_{−i}) for all i ∈ J_max, with some w̃*_i = 0, the point w* given by

w*_j = w̃*_j for j ∈ J_max;  w*_j = p̃_j([0, w̃*]_{−j}) for j ∉ J_max

will satisfy the whole system v_i = p̃_i(v_{−i}) for all i ∈ {1, ..., n}, with some w*_i = 0. Hence, it is without loss of generality to consider v* = v·1_n.
Since v·1_n is a positive fixed point, it must satisfy Av = b, so for all i we get

b_i = (1 − Σ_{j ≠ i} λ*_j) v.   (23)

The main idea is to construct a fixed point w* with w*_i = 0 whenever b_i < 0. So we first show that such an i is guaranteed to exist in this subcase. This is achieved by the following lemma:

Lemma 8.
Suppose λ_1, ..., λ_n are nonnegative numbers satisfying

Σ_{i ≠ k} λ_i ≤ 1   (24)

for every k ∈ {1, ..., n}. Then Σ_{i=1}^n λ_i/(1 + λ_i) ≤ 1. Moreover, if at least one of the inequalities (24) is strict, then Σ_{i=1}^n λ_i/(1 + λ_i) < 1.

Indeed, suppose b_i ≥ 0 for all i. Then, by (23), Σ_{i ≠ k} λ*_i ≤ 1 for all k ∈ {1, ..., n}, so the λ*_i satisfy the premise of Lemma 8. Hence, Σ_{i=1}^n λ*_i/(1 + λ*_i) ≤ 1, contradicting the supposition Σ_{i=1}^n λ*_i/(1 + λ*_i) > 1. Hence, there exists j such that b_j < 0. Without loss of generality, let b_1 ≥ b_2 ≥ ... ≥ b_n.

If b_1 ≤ 0, then 0·1_n is the desired fixed point. Indeed, then we have 0 = max{b_i, 0} = p̃_i(0·1_{n−1}) for all i. So assume b_1 > 0. Hence, there exists s* ∈ {1, 2, ..., n − 1} such that b_s ≥ 0 for s ≤ s* and b_s < 0 for s > s*.

We will construct a fixed point w* such that w*_i = 0 for i > s*. Denote by A^restr the restricted matrix formed by the first s* rows and columns of A. Denote by b^restr the vector formed by the first s* elements of b.

Because b_1 > 0 and b_i ≥ 0 for i = 2, ..., s*, by Lemma 8 we must have Σ_{i=1}^{s*} λ*_i/(1 + λ*_i) < 1, so A^restr is of full rank. So define w* by

w*_j = ((A^restr)^{−1} b^restr)_j for j ≤ s*;  w*_j = 0 for j > s*.

Proposition 5. w* ∈ V* and w*_i = 0 for some i.

Proof:
See Appendix. □
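The Subcase 2 recipe can be replayed in code. Below is our sketch (names ours): reorder bidders so that b is descending, zero out the coordinates with b_s < 0, and solve the restricted system for the rest; the example takes b from (23) with v = 1.

```python
import numpy as np

def A_matrix(lam):
    """Matrix (14), also usable for a restricted subset of bidders."""
    n = len(lam)
    return np.eye(n) - (np.ones((n, n)) - np.eye(n)) * lam[None, :]

def p_tilde(lam, b, v):
    """The map (12): ptilde_i(v_{-i}) = max(sum_{j != i} lam_j v_j + b_i, 0)."""
    return np.maximum(lam @ v - lam * v + b, 0.0)

def boundary_fixed_point(lam, b):
    """Order bidders so that b_1 >= ... >= b_n, set w_j = 0 where b_j < 0,
    and solve the restricted system A^restr w = b^restr for the rest."""
    order = np.argsort(-b)                     # descending b
    lam_o, b_o = lam[order], b[order]
    s_star = int(np.sum(b_o >= 0))             # number of nonnegative b's
    w_o = np.zeros(len(b))
    if s_star > 0:
        w_o[:s_star] = np.linalg.solve(A_matrix(lam_o[:s_star]), b_o[:s_star])
    w = np.empty(len(b))
    w[order] = w_o                             # undo the reordering
    return w
```

In the example below, λ = (2, 0.45, 0.45) puts us in Subcase 2, and the constructed point is a fixed point of (12) with a zero coordinate, as Proposition 5 claims.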
We have constructed a fixed point w ∗ ∈ V ∗ with a zero coordinate which shows that Subcase 2 is impossible within
Case 3.
This finishes the consideration of
Grand case I.

Grand case II. λ*_i < 0 for some i ∈ {1, ..., n}.

Here, given p and λ*, we pinpoint a pair of another mechanism and a nonnegative vector λ** which gives a strictly higher R, thus reducing the problem to Grand case I.

Denote the set {i | λ*_i ≥ 0} by +(λ*) and the set {i | λ*_i < 0} by −(λ*). Define q := |{i | λ*_i < 0}|. Re-denote the threshold functions by p_i(v^+, v^−), where v^+ is the profile of values of bidders from +(λ*) and v^− is the profile of values of bidders from −(λ*).

For all bidders i ∈ +(λ*), define p^new_i(v_{−i}) := p_i(v^+_{−i}, 0), and for all bidders i ∈ −(λ*) define p^new_i(v_{−i}) := v. The transformation works as if bidders from −(λ*) were removed from the auction.

Lemma 9. R(p^new, λ*) ≥ R(p, λ*).

Proof:
Again, we consider separately the different infima in (10).

First, note that all the infima inf_{v ∈ W^≥_i(p)} (p_i(v_{−i}) − λ*v) for i ∈ −(λ*) increase, as p^new_i(v_{−i}) = v ≥ p_i(v_{−i}) (the increase is due both to the direct rise in p_i and to the shrinkage of the set W^≥_i(p)).

Second, consider inf_{v ∈ W^≥_i(p^new)} (p^new_i(v_{−i}) − λ*v) for i ∈ +(λ*). As before, inf_{v ∈ W^≥_i(p)} (p_i(v_{−i}) − λ*v) = inf_{v_{−i} ∈ [0,v]^{n−1}} (p_i(v^+_{−i}, v^−_{−i}) − λ*_{−i}v_{−i} − λ*_i v). For p^new, this infimum is equal to

inf_{v_{−i} ∈ [0,v]^{n−1}} (p_i(v^+_{−i}, 0) − λ*^+_{−i}v^+_{−i} − λ*^−_{−i}v^−_{−i} − λ*_i v).

But note that the minimand in the latter infimum is nondecreasing in v^−_{−i}, due to the sign of λ*^−_{−i}. Hence,

inf_{v_{−i} ∈ [0,v]^{n−1}} (p_i(v^+_{−i}, 0) − λ*^+_{−i}v^+_{−i} − λ*^−_{−i}v^−_{−i} − λ*_i v) = inf_{v^+_{−i} ∈ [0,v]^{n−q−1}} (p_i(v^+_{−i}, 0) − λ*^+_{−i}v^+_{−i} − λ*_i v) =: R_1.

On the other hand, the point (v^+_{−i}, 0) is feasible in the minimization in inf_{v_{−i} ∈ [0,v]^{n−1}} (p_i(v^+_{−i}, v^−_{−i}) − λ*_{−i}v_{−i} − λ*_i v). Hence, this second infimum is not greater than inf_{v^+_{−i} ∈ [0,v]^{n−q−1}} (p_i(v^+_{−i}, 0) − λ*^+_{−i}v^+_{−i} − λ*_i v) = R_1. Thus, the values of such infima do not decrease upon replacing p_i(v_{−i}) with p_i(v^+_{−i}, 0) = p^new_i.

Finally, consider inf_{v ∈ W(p^new)} (−λ*v). We will prove that inf_{v ∈ W(p^new)} (−λ*v) is no less than at least one infimum in the expression for R(p, λ*), written as in (10), and the statement of this lemma will follow. Note that, unlike all previous steps, here we will not show that inf_{v ∈ W(p^new)} (−λ*v) necessarily dominates a similar infimum, inf_{v ∈ W(p)} (−λ*v). Rather, we will show that it is either no less than inf_{v ∈ W(p)} (−λ*v) or no less than inf_{v ∈ W^≥_j(p)} (p_j(v_{−j}) − λ*v) for some j.

If W(p^new) is empty, the infimum in the new mechanism is equal to +∞ and thus is trivially higher than inf_{v ∈ W(p)} (−λ*v). So suppose W(p^new) is nonempty. Consider minimizing sequences v^k ∈ W(p^new) for the problem inf_{v ∈ W(p^new)} (−λ*v). Note that there exists a minimizing sequence v^k such that v^k_i = 0 for all i ∈ −(λ*) and all k, since for all v ∈ W(p^new), (v^+, 0) ∈ W(p^new) and λ*_i < 0. Consider such a sequence.

Case 1. v^k ∈ W(p) starting from some k. This implies that inf_{v ∈ W(p)} (−λ*v) ≤ lim_{k→∞} (−λ*v^k) = inf_{v ∈ W(p^new)} (−λ*v), which implies the lemma.

Case 2. v^k ∉ W(p) for infinitely many k. Thus, as there is only a finite number of bidders, there exists j* such that v^k_{j*} ≥ p_{j*}(v^k_{−j*}) for infinitely many k. Consider hereafter only this subsequence. Recall that v^k ∈ W(p^new) for all k. If j* ∈ +(λ*), we would have p_{j*}(v^k_{−j*}) = p^new_{j*}(v^k_{−j*}) > v^k_{j*} ≥ p_{j*}(v^k_{−j*}), a contradiction. So j* ∈ −(λ*), and thus for all k, v^k_{j*} = 0 ≥ p_{j*}(v^k_{−j*}) ≥ 0. So p_{j*}(v^k_{−j*}) = 0 for all k. Moreover, v^k ∈ W^≥_{j*}(p) for all k. Hence, inf_{v ∈ W^≥_{j*}(p)} (p_{j*}(v_{−j*}) − λ*v) ≤ lim_{k→∞} (0 − λ*v^k) = inf_{v ∈ W(p^new)} (−λ*v), which again implies the lemma. □

Now consider a vector λ** obtained by replacing λ*_i for i ∈ −(λ*) with zeros.

Lemma 10. R(p^new, λ**) > R(p^new, λ*).

Proof:
We will prove that the minimum of all the infima in (10) does not change when λ*_i for i ∈ −(λ*) are replaced with zeros. Since m_i > 0, revenue will then strictly increase.

Note that the infima other than inf_{v ∈ W^≥_i(p^new)} (p^new_i(v_{−i}) − λ*v) for i ∈ −(λ*) are minimized by setting v_j = 0 for all j ∈ −(λ*). This will continue to hold when λ* is replaced with λ**. Thus, replacing λ* with λ** will not change any infimum except inf_{v ∈ W^≥_i(p^new)} (p^new_i(v_{−i}) − λ*v) = v(1 − Σ_{j ∈ +(λ*)} λ*_j − λ*_i) for i ∈ −(λ*). But note that none of the infima inf_{v ∈ W^≥_i(p^new)} (p^new_i(v_{−i}) − λv), i ∈ −(λ*), is strictly minimal among all the infima, for both λ = λ* and λ = λ**. Indeed, if +(λ*) ≠ ∅, one can take any j ∈ +(λ*) and consider inf_{v ∈ W^≥_j(p^new)} (p^new_j(v_{−j}) − λv). Since the point with v_s = v for s ∈ +(λ*) ∪ {i} and v_s = 0 otherwise is feasible, and p^new_j(·) ≤ v, this infimum is weakly lower than inf_{v ∈ W^≥_i(p^new)} (p^new_i(v_{−i}) − λv) = v(1 − Σ_{j ∈ +(λ*)} λ_j − λ_i) for both λ = λ* and λ = λ**. If, on the other hand, +(λ*) = ∅, then inf_{v ∈ W(p^new)} (−λv) is lower, as it is equal to 0 < v(1 − λ_i) for both λ = λ* and λ = λ**. □

Along with Lemma 9, we get the inequality R(p^new, λ**) > R(p, λ*). Finally, to find an LSA that dominates p^new, and thus p, feed the pair [p^new, λ**] into Grand case I.
This is possible because all entries of λ** are nonnegative. Note that now the tilde-transformation will be applied to p^new. Also, even though the whole vector λ** might not be an optimal one for the mechanism p^new, this does not create a problem, since the optimality of λ* is not used throughout the constructions in Grand case I. □

The proof of Theorem 1 is complete.
Theorem 1 identifies the form of an optimal mechanism. It remains to identify optimal values of the parameters r_i. To do this, one has to maximize R(p, λ), where the functions p are given by (5) for some r, both with respect to r and λ. Note that it follows from Lemma 10 that any optimal mechanism M* is such that its optimal Lagrange multiplier λ*(M*) is nonnegative. Together with Proposition 1, this implies that in finding the optimal linear score auction, one can restrict oneself to λ ≥ 0. For given r and λ, one has to solve the inner minimization problems in (8). This is done in the proof of the following lemma:

Lemma 11.
Suppose the threshold functions p are given by (5) and λ ≥ 0. Then,

R(p, λ) = λm + min{ v(1 − Σ λ_i), min_i (r_i − λ_{−i}r_{−i} − λ_i v), −λr } if r > 0;
R(p, λ) = λm + min{ v(1 − Σ λ_i), min_i (r_i − λ_{−i}r_{−i} − λ_i v) } otherwise.   (25)

The two cases arise because when r_i = 0 for some i, W(p) = ∅, so inf_{W(p)} (−λv) jumps to +∞ at all such points.

One approach to maximizing this function of 2n variables would be to maximize first over λ and then over r. This corresponds to finding explicitly the worst-case revenue for every linear score auction and then finding the best auction. Even though this might seem a more natural approach, it turns out to be more cumbersome.

So we optimize first over r and then over λ instead. Abusing notation, denote the revenue function by R(r, λ). The function r → R(r, λ) is piecewise-linear and potentially discontinuous at points where r_i = 0 for some i. Maximization suggests that at least two of the arguments of the minimum operator in (25) should be equal to each other at an optimum, but it is unclear a priori which two (or more). Fortunately, one can show that (1) when Σ_i λ_i/(1 + λ_i) ≤ 1, all arguments but v(1 − Σ λ_i) must be equal to each other; (2) the case Σ_i λ_i/(1 + λ_i) > 1 cannot arise at optimal r and λ. This is stated in the following two lemmas.

Given λ ≥ 0, define the point r*(λ) by r*_i(λ) := [λ_i/(1 + λ_i)] v.

Lemma 12. If Σ_i λ_i/(1 + λ_i) ≤ 1, then r*(λ) solves the problem max_r R(r, λ).

Lemma 13. Suppose (λ, r) is a solution to max_{r,λ} R(r, λ). Then Σ_i λ_i/(1 + λ_i) ≤ 1.

In the proof of Lemma 12, we build an iterative procedure that at each iteration moves the price vector r to equalize a growing number of arguments of the minimum operator in (25), terminating when all but v(1 − Σ λ_i) are equal to each other, implying that it finishes at r*(λ). Using the condition Σ_i λ_i/(1 + λ_i) ≤ 1, we show that at each step the value of (25) does not decrease. The prices r with r_i = 0 for some i are treated in a special way.

It follows from Lemmas 12 and 13 that any optimal Lagrange multiplier λ* solves the following problem:

max_{λ ≥ 0} Σ_{i=1}^n (m_i λ_i − [λ_i²/(1 + λ_i)] v)   (26)

s.t. Σ_{i=1}^n λ_i/(1 + λ_i) ≤ 1.   (27)

By Lemma 7, the constraint (27) says precisely that det A(λ) is nonnegative. It is no coincidence that this constraint appears also in the present part of the analysis. Indeed, if this constraint is violated, the transformed threshold functions p̃ are mutually located as in figure 5, right. But in this case, proceeding to the revenue-improving LSA improves revenue strictly, as the threshold functions are moved strictly up (except at one point), and the object is allocated at every value profile. Hence, the initial mechanism (which could itself be an LSA) was suboptimal. (This is the geometric intuition behind the algebraic proof of Lemma 13.) Thus, the constraint (27) eliminates situations with the “wrong” implied geometry of p̃. We refer to it as the “geometry constraint”.

The solution to the problem (26) differs depending on whether the geometry constraint (27) binds. If the constraint does not bind, the solution can be easily computed to be λ*_i = √(v/(v − m_i)) − 1. Thus, the geometry constraint binds whenever Σ_i λ*_i/(1 + λ*_i) ≥ 1, that is, when

Σ_{i=1}^n √(1 − m_i/v) ≤ n − 1,

i.e., when the means are relatively high. Whether this inequality holds will determine two different regimes for the solution of the optimal auction problem.

When the geometry constraint (27) binds, the nonnegativity constraints might also start to bind for bidders whose means are relatively low (even though the means are high overall). The full solution to the problem (26)-(27) is given in the following lemma.

Lemma 14.
Enumerate bidders in such a way that m_1 ≤ m_2 ≤ ... ≤ m_n. The solution to the problem (26)-(27) is as follows:

1. If Σ_{i=1}^n √(1 − m_i/v) > n − 1,

λ*_i = √(v/(v − m_i)) − 1 for all i.   (28)

2. If Σ_{i=1}^n √(1 − m_i/v) ≤ n − 1,

λ*_i = 0 for i < k*;  λ*_i = [Σ_{j=k*}^n √(v − m_j) / (n − k*)] / √(v − m_i) − 1 for i ≥ k*,   (29)

where k* = min{ k ≤ n − 1 | Σ_{i=k+1}^n √(v − m_i) / √(v − m_k) > n − k − 1 }.

The bidders with λ*_i = 0 will be the ones who can be excluded from the auction without loss of worst-case revenue. For a given bundle of parameters (m, v), denote the set {i : λ*_i = 0} by WE(m, v) – those are the bidders that are weakly excluded. Denote the complementary set of bidders by SI(m, v) – those that are strictly included. Note that no more than n − 2 bidders can be weakly excluded; in particular, there is no weak exclusion when n = 2. Given the definition of k*, the condition that ensures that no bidder gets weakly excluded for n ≥ 3 is

√(v − m_1) < (1/(n − 2)) Σ_{i=2}^n √(v − m_i).

That is, the lowest of the means still has to be relatively high.

Now, given the optimal λ* stated in Lemma 14, we want to recover the optimal reserve prices r* for the linear score auction. Recall that by Lemma 12, r*(λ) given by r*_i = [λ_i/(1 + λ_i)] v is an optimal vector of prices. However, there may be others. Indeed, at some of the steps in the proof of Lemma 12, the inequalities may have been only weak. Carefully tracing this leads to the following answer.

Proposition 6.
The set of optimal generalized reserve prices for the corner-hitting linear score auction is as follows:

1. If Σ_{i=1}^n √(1 − m_i/v) > n − 1 (means are low), there is a unique vector of optimal prices r* given by

r*_i = v − √(v(v − m_i)) for all i;   (30)
2. If Σ_{i=1}^n √(1 − m_i/v) ≤ n − 1 (means are high), a vector of reserve prices r* ∈ [0, v]^n is optimal if and only if it satisfies the following system:

Σ_{i ∈ SI(m,v)} r*_i ≤ v;  (v − r*_i)/(v − r*_j) = √((v − m_i)/(v − m_j)) for all i, j ∈ SI(m, v),   (31)

where SI(m, v) is the set of bidders for whom the optimal Lagrange multipliers λ*, as given by (29), are positive. The prices r_i for the weakly excluded bidders can be set arbitrarily (and the exclusion is achieved when r_i = v).

Figure 6, left, depicts the areas of parameter space that correspond to low/high means and to the presence/absence of weak exclusion for a specific example with n = 3. Figure 6, right, shows the set of optimal price vectors for the case of high means if n = 2.

[Figure 6: (Left.) Different regimes for the optimal solution depending on the means, n = 3. One bidder's value has a mean of m_1 and the other two bidders' values have a mean of m_2. In area A means are low and the vector of optimal reserve prices is unique. In areas B and C means are high and there are multiple solutions. In addition, in area C the bidder with the lowest mean is weakly excluded from the auction. (Right.) The set of optimal price vectors in the high-means case if n = 2, m_1 > m_2.]

As noted in the introduction, the solution features discrimination against stronger bidders. Indeed, for any two bidders i, j ∈ SI(m, v), m_i > m_j implies r*_i > r*_j for any optimal price vector r*. This is consistent with the standard result in auction theory (Myerson, 1981). The differences from the classic solution include the weak exclusion of bidders discussed above, the multiplicity of solutions, and the fact that the set of optimal reserve prices depends on n in the symmetric case.

The multiplicity of solutions in the high-means case stems from the fact that many auctions share the same relevant region of conv[t^M(v)] and induce the same worst-case distribution.
This distribution is such that the sale always happens (see more on this in Section B). Note, however, that the (generalized) reserve prices also control the slope of the boundary determining which bidder gets the object, which is important. This is why the prices must belong to a certain line segment.

In the symmetric case, we obtain the following corollary of Proposition 6:

Corollary 2.
Suppose bidders are symmetric (m_i = m_j =: m). Then, a second-price auction with reserve price r solves problem (1) if and only if either

r = v − √(v(v − m)) and n < 1/(1 − √(1 − m/v)),

or

r ∈ [0, v/n] and n ≥ 1/(1 − √(1 − m/v)).

Thus, the set of optimal reserve prices decreases in the strong set order and converges to {0} as n → ∞. Thus, as competition increases, a reserve price is no longer needed to protect the seller from low-revenue distributions. The result that a second-price auction without a reserve is an optimal mechanism for n sufficiently high echoes results in He and Li (2020), Che (2019) and Suzdaltsev (2020), who show that setting a reserve equal to the seller's own value is either exactly or asymptotically optimal in related maxmin settings. (This parametric solution has been obtained by Koçyiğit et al. (2020); however, they take the auction format – SPA – as given.)

Note also that with symmetric bidders, the seller never sets the reserve above v/n, even if the mean m is arbitrarily close to v. This contrasts with the classical case, in which the optimal reserve may be arbitrarily close to v if the distribution is concentrated in a small neighborhood of v. This is yet another manifestation of the extreme cautiousness exercised by the seller.

The proof of Theorem 1 establishes that a corner-hitting linear score auction is an optimal mechanism in the considered environment. But do there exist other optimal mechanisms? In other words, to what extent is linearity important for the optimality of the mechanism? In this section, we study this aspect of the problem for the case of two bidders. We characterize the whole set of optimal (deterministic) mechanisms. This set is far from being a singleton. The features of the set turn out to depend significantly on whether the means are low (√(1 − m_1/v) + √(1 − m_2/v) > 1) or high (√(1 − m_1/v) + √(1 − m_2/v) ≤ 1).

Denote by R(M, λ) the revenue under a mechanism M and Lagrange multipliers λ. Let λ*(M) be any Lagrange multipliers solving the problem (7) (equivalently, max_λ R(M, λ)) for a mechanism M. Denote by LSA^opt any optimal linear score auction.

Lemma 15. If a mechanism M solves problem (1), any solution λ*(M) to the dual of Nature's problem (7) given M coincides with λ*(LSA^opt), given by (28) and (29).

Proof:
Denote by $R^*$ the optimal revenue achieved by any mechanism. As $M$ is optimal, $R^* = R(M, \lambda^*(M))$. By the proof of Theorem 1, there exists a linear score auction LSA such that $R(M, \lambda^*(M)) \le R(\mathrm{LSA}, \lambda^*(M))$. Hence, $R(\mathrm{LSA}, \lambda^*(M)) \ge R^*$ and this LSA has to be, in fact, an optimal LSA, LSA$_{opt}$. Thus, $\lambda^*(M)$ has to be a maximizer of $R(\mathrm{LSA}_{opt}, \lambda)$. Such a maximizer is unique and is given by (28) and (29). $\square$
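The reserve price $\bar v - \sqrt{\bar v(\bar v - m)}$ appearing in the symmetric-bidder result above is also the classic one-bidder maximin posted price under a known mean and support bound (cf. Carrasco et al. (2018a)). The following sketch checks this numerically; the two-point adversarial distribution encoded in `worst_case_revenue` is the standard construction, and the numerical values are illustrative.

```python
import math

def worst_case_revenue(p, m, v_bar):
    """Guaranteed revenue of posted price p against all distributions on
    [0, v_bar] with mean m: the adversary parks non-buyers just below p
    and puts the remaining mass at v_bar, so the buying probability is
    (m - p)/(v_bar - p) for p <= m (and 0 for p > m)."""
    if p > m:
        return 0.0  # a point mass at m already yields zero revenue
    return p * (m - p) / (v_bar - p)

m, v_bar = 0.5, 1.0
# grid search for the maximin price on [0, m]
grid = [i * m / 10000 for i in range(10001)]
p_best = max(grid, key=lambda p: worst_case_revenue(p, m, v_bar))
p_formula = v_bar - math.sqrt(v_bar * (v_bar - m))  # the reserve formula above
print(p_best, p_formula)
```

At the optimum the guarantee simplifies to $(\sqrt{\bar v} - \sqrt{\bar v - m})^2$.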
The set of optimal mechanisms for $n = 2$ is characterized in the following two propositions.

Proposition 7.
Suppose $\sqrt{1 - m_1/\bar v} + \sqrt{1 - m_2/\bar v} > 1$. Let $r_i^*$ be the reserve prices of the optimal LSA given by (30) and $\lambda_i^*$ be the optimal Lagrange multipliers given by (28). Then a mechanism $(p_1(v_2), p_2(v_1))$ solves problem (1) if and only if the following conditions all hold:

1. $p_i(v_{-i}) \ge r_i^* + \lambda_i^*(v_{-i} - r_{-i}^*)$ for all $v_{-i} \in [0, \bar v]$;
2. $p_i(v_{-i}) \le \big(\lambda_1^* r_1^* + \lambda_2^* r_2^* - \lambda_{-i}^* v_{-i}\big)/\lambda_i^*$ for $v_{-i} \le r_{-i}^*$;
3. $p_i(v_{-i})$ are weakly increasing for $v_{-i} \ge r_{-i}^*$;
4. $p_1(v_2)$ and $p_2(v_1)$ are inverse to each other for $v_i \ge r_i^*$ whenever possible, i.e., for any interval $(v', v'')$ with $v' \ge r_{-i}^*$ on which $p_{-i}$ is strictly increasing, $p_i(p_{-i}(v_i)) = v_i$ for all $v_i \in (v', v'')$, and similarly with the roles of the bidders reversed.

Proposition 8.
Suppose $\sqrt{1 - m_1/\bar v} + \sqrt{1 - m_2/\bar v} \le 1$. Let $\lambda_i^*$ be the optimal Lagrange multipliers given by (29); note that $\lambda_1^* \lambda_2^* = 1$. Let $r_i^* = \frac{\lambda_i^*}{1 + \lambda_i^*}\bar v$ be the highest optimal reserve prices for the LSA. Then a mechanism $(p_1(v_2), p_2(v_1))$ solves problem (1) if and only if the following conditions all hold:

1. $p_i(v_{-i}) = r_i^* + \lambda_i^*(v_{-i} - r_{-i}^*)$ for all $v_{-i} \ge r_{-i}^*$;
2. $p_i(v_{-i}) \ge r_i^* + \lambda_i^*(v_{-i} - r_{-i}^*)$ for all $v_{-i} \in [0, \bar v]$;
3. $p_i(v_{-i}) \le \big(\lambda_1^* r_1^* + \lambda_2^* r_2^* - \lambda_{-i}^* v_{-i}\big)/\lambda_i^*$ for $v_{-i} \le r_{-i}^*$.

In both propositions, sufficiency is easy to prove from the fact that for any mechanism satisfying the conditions it can be verified that $R(M, \lambda^*) = R^*$. Necessity is trickier. The main idea is that if one submits any optimal mechanism to the proof of Theorem 1, the resulting LSA should be an optimal one, characterized by Proposition 6. This means that one of the fixed points of $\tilde p$, derived from $M$, must be an optimal vector of prices. Given that we know the slopes of $\tilde p$ from Lemma 15, this reconstructs $\tilde p$ for any optimal mechanism $M$, thus yielding condition 1 in Proposition 7 (per the property $\tilde p \le p$). Condition 2 in the same proposition stems from the fact that for any optimal mechanism, $\inf_{v \in W}(-\lambda^* v)$ may not be lower than that for the optimal LSA. Finally, conditions 3 and 4 in Proposition 7 stem from the fact that the threshold functions for the optimal mechanism must both satisfy the supply constraint and be tight to each other, not allowing the "holes" that would expand the set $W$.

In Proposition 8, the linearity of the threshold functions (condition 1) is just a consequence of the fact that the reconstructed $\tilde p$ are tight to each other, and thus, per the property $\tilde p \le p$ and the fact that one shouldn't expand the set $W$ too much, the threshold functions must coincide with $\tilde p$ when values are sufficiently high.
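One consequence of condition 1 in Proposition 8 is worth making explicit: because $\lambda_1^* \lambda_2^* = 1$, the two linear threshold functions are mutual inverses above the reserves, so together they trace a single straight allocation boundary. A minimal numerical sketch (parameter values are illustrative; `p1`, `p2` are our notation for the threshold functions):

```python
lam1 = 0.6
lam2 = 1 / lam1                   # Proposition 8: lam1 * lam2 = 1
v_bar = 1.0
r1 = lam1 / (1 + lam1) * v_bar    # highest optimal reserves
r2 = lam2 / (1 + lam2) * v_bar

def p1(v2):  # bidder 1's threshold as a function of bidder 2's value
    return r1 + lam1 * (v2 - r2)

def p2(v1):  # bidder 2's threshold as a function of bidder 1's value
    return r2 + lam2 * (v1 - r1)

# above the reserves the thresholds invert each other: p1(p2(v)) == v
checks = [abs(p1(p2(v)) - v) for v in [0.4, 0.6, 0.8, 1.0]]
print(max(checks))
```

This is exactly the inverse-to-each-other property that condition 4 of Proposition 7 imposes only piecewise; in the high-means case it holds globally above the reserves.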
Because the threshold functions are pinned down for $v_i \ge r_i^*$, one does not have to prove analogs of conditions 3 and 4 of Proposition 7 in Proposition 8.

Figure 7: Optimal mechanisms in a low-means case ($n = 2$, $\bar v = 1$, $m_i = 0.\ldots$, $r_i^* = 0.\ldots$, $\lambda_i^* = 2/3$). The graphs of all optimal threshold functions must lie within the shaded areas. A sample optimal mechanism is shown.

The sets of optimal mechanisms in the low-means and high-means cases are depicted in Figures 7 and 8. We see that in the low-means case (Figure 7) it is necessary that the graphs of the threshold functions of any optimal mechanism pass exactly through the point $(r_1^*, r_2^*)$. This signifies the fact that when means are relatively low, the main trade-off the seller faces is between not selling the good and the prices upon selling, not between favoring bidder 1 or 2 when both values are above the reserve prices. The existence of the area denoted by "0" shows that it is strictly optimal not to sell the good sometimes. Note, however, that the threshold functions should still be "close enough" to those of the optimal linear score auction, with the "safe neighborhood" of the optimal linear score auction given by $\lambda^*$.

In the high-means case (Figure 8), the "safe neighborhood" collapses, so that any optimal mechanism should coincide with the optimal linear score auction for $v_i \ge r_i^*$. This stems from the fact that with high means, the probability of not selling the good will be small anyway (provided that prices are not very high), so the main trade-off is about designing the right mode of bidders' competition. Note also that, as the "0" set collapses, it becomes weakly optimal to always sell the object.

Figure 8: Optimal mechanisms in a high-means case ($n = 2$, $\bar v = 1$, $m_1 = 0.\ldots$, $m_2 = 0.\ldots$, $r_1^* = 3/8$, $r_2^* = 5/8$, $\lambda_1^* = 1/\lambda_2^* = 0.6$). The graphs of all optimal threshold functions must coincide with the thick line within the unshaded area, and must lie above it within the shaded area. The thick line itself represents an optimal mechanism, a linear score auction that always allocates the good. The two curves correspond to another mechanism.
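The caption's values fit together via $r_i^* = \frac{\lambda_i^*}{1 + \lambda_i^*}\bar v$ from Proposition 8; moreover, with these parameters the allocation boundary passes through the origin, which is why this linear score auction always allocates the good. A quick arithmetic check (values taken from the caption as reconstructed):

```python
v_bar = 1.0
lam1 = 0.6                        # lambda_1* = 1 / lambda_2*, as in the caption
lam2 = 1 / lam1
r1 = lam1 / (1 + lam1) * v_bar    # highest optimal reserve of bidder 1
r2 = lam2 / (1 + lam2) * v_bar    # highest optimal reserve of bidder 2
boundary_at_zero = r2 - lam2 * r1 # bidder 2's threshold evaluated at v_1 = 0
print(r1, r2, boundary_at_zero)
```

A zero value of `boundary_at_zero` means the thick line in the figure hits the corner $(0, 0)$, leaving no "0" region.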
In this section, we analyze the case where the seller may put different upper bounds on different bidders' values. For simplicity, we focus on the case of two bidders and assume that the support of the joint distribution $F$ is contained in $[0, \bar v_1] \times [0, \bar v_2]$ for some $\bar v_1 \ge \bar v_2 > 0$.

Theorem 2. Suppose $\bar v_1 \ge \bar v_2$. Then there exists a mechanism $M^*$ that solves (1) and is a linear score auction with parameters $\beta_1 = \gamma$, $\alpha_1 = \gamma r_1$, $\beta_2 = 1$, $\alpha_2 = r_2$ for some $r_i \in [0, \bar v_i]$ and $\gamma \in \mathbb{R}_+$.

Figure: Even when $\bar v_1 = \bar v_2$, one may strictly prefer the boundary "hitting the wall" (right picture) rather than "hitting the corner" (left). Digits 1 and 2 denote the areas of the value space where the corresponding bidders get the object. Zero denotes the areas where the object is kept by the seller.

The parametric solution is given in the following proposition:

Proposition 9.
Suppose $\bar v_1 \ge \bar v_2$. Then:

1. If $\sqrt{1 - m_1/\bar v_1} + \sqrt{1 - m_2/\bar v_2} > 1$, the optimal prices $r_i^*$ are unique and given by
$$r_i^* = \bar v_i - \sqrt{\bar v_i(\bar v_i - m_i)}, \quad i = 1, 2. \tag{32}$$
The optimal slope $\gamma^*$ is not unique; $\gamma^*$ is optimal if and only if
$$\gamma^* \in \left[\frac{r_2^*}{\bar v_1 - r_1^*},\ \frac{\bar v_2 - r_2^*}{r_1^*}\right]. \tag{33}$$
2. If $\sqrt{1 - m_1/\bar v_1} + \sqrt{1 - m_2/\bar v_2} \le 1$, the optimal slope $\gamma^*$ is unique and given by the (unique) positive solution to the equation
$$\frac{\bar v_1 - \bar v_2}{\gamma + 1} + (\bar v_1 - m_1)\gamma = \bar v_2 - m_2. \tag{34}$$
The optimal pair of prices is not unique; $(r_1^*, r_2^*)$ is optimal if and only if
$$\frac{\bar v_2 - r_2^*}{\tilde v_1 - r_1^*} = \gamma^* \quad \text{and} \quad \frac{r_1^*}{\bar v_1} + \frac{r_2^*}{\bar v_2} \le 1,$$
where $\tilde v_1 = \frac{\gamma^* \bar v_1 + \bar v_2}{\gamma^* + 1} \le \bar v_1$ is the minimum reported value of bidder 1 such that she wins regardless of the second bidder's report.

The main result and its proof are almost unchanged if the seller puts different lower bounds $\underline v_i$ on bidders' values that can also differ from the seller's own valuation $c$ (but the upper bound is still the same). Note, however, that in the "gap case" ($c < \max_i \underline v_i$) the parametric solution will be substantially different from the one identified in Section 5. As setting $r_i$ lower than $\underline v_i$ raises the worst-case probability of sale discontinuously to one, the seller will sometimes strictly prefer such "sure-sale" linear score auctions to the ones identified in the baseline case. The part of the above analysis that fails in this case is Lemma 12, as now the discontinuity in (25) starts playing a role.

Throughout the analysis, we have maintained the assumption that the mean of the valuation vector distribution is known exactly, i.e., the mean constraint is an equality constraint. This assumption has required additional work to rule out negative Lagrange multipliers in
Grand Case II of the proof of the main result; in contrast, had we assumed that only a lower bound for the mean is known (as in Koçyiğit et al. (2020)), we would get the nonnegativity of $\lambda$ for free. We think that the equality constraint is a more plausible modeling choice under the "educated guess" interpretation of the known-mean assumption. Note, however, that the inequality constraint may be a better choice when the mean information comes from data obtained from previous auctions with the same bidders. It is well-recognized that in that case bidders, anticipating that their reports will affect the design of a future auction, may strategically shade their bids in an otherwise truthful mechanism. Thus, such data will indeed provide only a lower bound on the valuations' means. It is therefore warranted to articulate the following result.

Corollary 3.
The set of optimal mechanisms is the same regardless of whether the seller knows that $E(v) = m$ or $E(v) \ge m$. Proof:
Recall that under the equality constraint the transformed problem was to maximize $R(p, \lambda)$, given by (10), over threshold functions $p$ and $\lambda \in \mathbb{R}^n$. Under the constraint $E(v) \ge m$, the transformed problem becomes: maximize $R(p, \lambda)$ over $p$ and $\lambda \in \mathbb{R}^n_+$. Then the result follows from the fact that any solution $(p^*, \lambda^*)$ to the original problem involves $\lambda^* \ge 0$. $\square$

(Kanoria and Nazerzadeh (2017) propose an approximate solution to the incentive problem for the case of i.i.d. values. It is less clear how to alleviate it in the case where bidders are ex ante asymmetric and may have correlated values.)

It might also be interesting to consider the problem in which the seller knows that $E(v_i) \in [\underline m_i, \bar m_i]$ for all $i$. Given that $\lambda^*(M^*) \ge 0$ for any optimal mechanism $M^*$ in the known-means case, it follows from our result that in such a problem the set of optimal mechanisms is exactly the same as the set of optimal mechanisms for the case when the means are known to be $\underline m_i$. This result is not trivial, as there exist mechanisms under which worst-case revenue may be lower for higher $m_i$ (see the Online Appendix), and it is not a priori clear that such mechanisms are suboptimal.

In this paper, we presented a solution to a basic distributionally robust mechanism design problem: the problem of allocating an indivisible good among $n$ buyers in a manner that maximizes worst-case expected revenue when the seller knows only the means of the value distributions and an upper bound on their support. The identified solution is simple and may be thought of as a linear version of the classic solution for the case where value distributions are known (Myerson, 1981). The proof is based on strong linear programming duality, a geometric construction, and an analysis of the set of fixed points of a certain piecewise-affine map. We then solved the parameter-tuning problem and identified two regimes for the parametric solution.
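To make the worst-case objective concrete in the simplest instance, consider one bidder facing a posted price $p$, with known mean $m$ and support $[0, \bar v]$ (all numerical values below are illustrative). The infimum over all feasible distributions is already approached by two-point distributions, in line with the finite-support reduction used in the proofs:

```python
import random

v_bar, m, p = 1.0, 0.5, 0.3

def revenue(price, support, probs):
    # expected revenue of a posted price against a finite-support distribution
    return price * sum(q for x, q in zip(support, probs) if x >= price)

# two-point distributions {x, v_bar} with mean m: weight on v_bar is (m-x)/(v_bar-x)
random.seed(0)
xs = [random.uniform(0, p) for _ in range(1000)] + [p - 1e-9]
revs = [revenue(p, [x, v_bar], [1 - (m - x)/(v_bar - x), (m - x)/(v_bar - x)])
        for x in xs]
bound = p * (m - p) / (v_bar - p)   # the worst-case value
print(min(revs), bound)
```

Each candidate distribution puts weight $(m-x)/(\bar v - x)$ on $\bar v$ and the rest on a point $x < p$, so its mean is exactly $m$; pushing $x$ up toward $p$ drives the revenue down to the worst-case value $p(m-p)/(\bar v - p)$.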
We compared and contrasted the identified solution to the classic one, and characterized the full set of solutions in the two-bidder case. The linearity of the boundary determining the bidder getting the object is indeed necessary for optimality if the known means are sufficiently high.

The analysis is certainly not free of limitations that are simultaneously avenues for future research. Some of those are as follows:

• Randomized mechanisms.
It is well known that randomized solutions can perform strictly better in robust optimization problems thanks to their ability to provide hedging against various strategies of Nature. Thus, our restriction to deterministic mechanisms is certainly with loss of revenue. In fact, it follows from our results and Theorem 11 in Koçyiğit et al. (2020) that
$$\inf_m \frac{R^*_{det}(m)}{R^*_{rand}(m)} = 0,$$
where $R^*_{det}(m)$ and $R^*_{rand}(m)$ are the best revenue guarantees of a deterministic and a randomized mechanism when all bidders' values have the same mean $m$. In this sense, the loss of revenue can be significant, and it is certainly interesting to know what an optimal randomized mechanism is. However, there are two caveats to the above ratio analysis. First, it follows from the analysis in Koçyiğit et al. (2020) that the infimum is achieved only when $m \to 0$, but then both $R^*_{det}(m) \to 0$ and $R^*_{rand}(m) \to 0$, and the absolute difference $R^*_{rand}(m) - R^*_{det}(m)$ remains a small fraction of $\bar v$. Thus, if there is a sufficiently large cost $C$ of using a randomized mechanism (say, due to its complexity; numerical analysis shows that it is quite complex), it will be an optimal decision for the seller to use a deterministic mechanism regardless of the mean $m$, even if randomization is feasible. We expect a similar argument to hold for the case of multiple bidders as well.

• Higher moments.
The proof of the main result in the present paper cannot be easily modified to accommodate higher-moment constraints. It follows again from strong duality that the natural candidate optimal threshold functions when $k$ moments are known are polynomials of degree $k$ (see Carrasco et al. (2018a) for the case of one bidder). Note, however, that using such polynomials on the full domain is very likely to be infeasible due to the supply constraint. To resolve the conflict and determine the winner of the object, it may be necessary to draw a boundary that may still be linear. One natural higher-moment constraint is the nonnegative covariance constraint. Because, with two bidders, the function $v_{-i} \mapsto v_1 v_2$ is linear in $v_{-i}$, a similar tilde-transformation can be used to establish the optimality of a linear score auction under known means and nonnegative covariance if there are two bidders. However, it is unclear whether the result is true for $n \ge 3$ (the candidate threshold functions would involve terms of the form $p_i(v_{-i}) - \zeta v_j v_k$, $i \ne j \ne k$, in the dual problem).

References
Allouah, Amine and Omar Besbes (2020), "Prior-independent optimal auctions." Management Science.
Auster, Sarah (2018), "Robust contracting under common value uncertainty." Theoretical Economics, 13, 175–204.
Azar, Pablo, Constantinos Daskalakis, Silvio Micali, and S. Matthew Weinberg (2013), "Optimal and efficient parametric auctions." In Proceedings of the Twenty-Fourth Annual ACM-SIAM Symposium on Discrete Algorithms, 596–604, Society for Industrial and Applied Mathematics.
Azar, Pablo Daniel and Silvio Micali (2013), "Parametric digital auctions." In Proceedings of the 4th Conference on Innovations in Theoretical Computer Science, 231–232, ACM.
Bergemann, Dirk, Benjamin A. Brooks, and Stephen Morris (2016), "Informationally robust optimal auction design."
Bergemann, Dirk and Karl Schlag (2011), "Robust monopoly pricing." Journal of Economic Theory, 146, 2527–2543.
Bergemann, Dirk and Karl H. Schlag (2008), "Pricing without priors." Journal of the European Economic Association, 6, 560–569.
Bonnans, J. Frederic and Alexander Shapiro (2000), Perturbation Analysis of Optimization Problems. Springer Science & Business Media.
Bose, Subir, Emre Ozdenoren, and Andreas Pape (2006), "Optimal auctions with ambiguity." Theoretical Economics, 1, 411–438.
Brooks, Benjamin A. (2013), "Surveying and selling: Belief and surplus extraction in auctions." Unpublished manuscript.
Brooks, Benjamin A. and Songzi Du (2019), "Optimal auction design with common values: An informationally-robust approach." Available at SSRN 3137227.
Carrasco, Vinicius, Vitor Farinha Luz, Nenad Kos, Matthias Messner, Paulo Monteiro, and Humberto Moreira (2018a), "Optimal selling mechanisms under moment conditions." Journal of Economic Theory, 177, 245–279.
Carrasco, Vinicius, Vitor Farinha Luz, Paulo K. Monteiro, and Humberto Moreira (2018b), "Robust mechanisms: the curvature case." Economic Theory, 1–20.
Carroll, Gabriel (2017), "Robustness and separation in multidimensional screening." Econometrica, 85, 453–488.
Carroll, Gabriel (2018), "Robustness in mechanism design and contracting." Annual Reviews.
Che, Ethan (2019), "Robust reserve pricing in auctions under mean constraints." Available at SSRN 3488222.
Che, Yeon-Koo (1993), "Design competition through multidimensional auctions." The RAND Journal of Economics, 668–680.
Chen, Hongqiao, Ming Hu, and Georgia Perakis (2019), "Distribution-free pricing." Available at SSRN 3090002.
Chung, Kim-Sau and Jeffrey C. Ely (2007), "Foundations of dominant-strategy mechanisms." The Review of Economic Studies, 74, 447–476.
Cremer, Jacques and Richard P. McLean (1988), "Full extraction of the surplus in bayesian and dominant strategy auctions." Econometrica: Journal of the Econometric Society, 1247–1257.
Delage, Erick and Yinyu Ye (2010), "Distributionally robust optimization under moment uncertainty with application to data-driven problems." Operations Research, 58, 595–612.
Dhangwatnotai, Peerapong, Tim Roughgarden, and Qiqi Yan (2015), "Revenue maximization with a single sample." Games and Economic Behavior, 91, 318–333.
Du, Songzi (2018), "Robust mechanisms under common valuation." Econometrica, 86, 1569–1588.
Giannakopoulos, Yiannis, Diogo Poças, and Alexandros Tsigonias-Dimitriadis (2019), "Robust revenue maximization under minimal statistical information." arXiv preprint arXiv:1907.04220.
Gilboa, Itzhak and David Schmeidler (1989), "Maxmin expected utility with non-unique prior." Journal of Mathematical Economics, 18, 141–153.
Goh, Joel and Melvyn Sim (2010), "Distributionally robust optimization and its tractable approximations." Operations Research, 58, 902–917.
Hartline, Jason D. and Tim Roughgarden (2009), "Simple versus optimal mechanisms." In Proceedings of the 10th ACM Conference on Electronic Commerce, 225–234.
He, Wei and Jiangtao Li (2020), "Correlation-robust auction design."
Kamenica, Emir and Matthew Gentzkow (2011), "Bayesian persuasion." American Economic Review, 101, 2590–2615.
Kanoria, Yash and Hamid Nazerzadeh (2017), "Dynamic reserve prices for repeated auctions: Learning from bids." Available at SSRN 2444495.
Koçyiğit, Çağıl, Garud Iyengar, Daniel Kuhn, and Wolfram Wiesemann (2020), "Distributionally robust mechanism design." Management Science, 66, 159–189.
Loertscher, Simon and Leslie M. Marx (2020), "Asymptotically optimal prior-free clock auctions." Journal of Economic Theory, 105030.
Myerson, Roger B. (1981), "Optimal auction design." Mathematics of Operations Research, 6, 58–73.
Neeman, Zvika (2003), "The effectiveness of english auctions." Games and Economic Behavior, 43, 214–238.
Papadimitriou, Christos H. and George Pierrakos (2011), "On optimal single-item auctions." In Proceedings of the Forty-Third Annual ACM Symposium on Theory of Computing, 119–128.
Popescu, Ioana (2007), "Robust mean-covariance solutions for stochastic optimization." Operations Research, 55, 98–112.
Scarf, Herbert (1958), "A min-max solution of an inventory problem." Studies in the Mathematical Theory of Inventory and Production.
See, Chuen-Teck and Melvyn Sim (2010), "Robust approximation to multiperiod inventory management." Operations Research, 58, 583–594.
Segal, Ilya (2003), "Optimal pricing mechanisms with unknown demand." American Economic Review, 93, 509–529.
Smith, James E. (1995), "Generalized chebychev inequalities: theory and applications in decision analysis." Operations Research, 43, 807–825.
Suzdaltsev, Alex (2020), "Distributionally robust pricing in independent private value auctions." Working paper.
Wiesemann, Wolfram, Daniel Kuhn, and Melvyn Sim (2014), "Distributionally robust convex optimization." Operations Research, 62, 1358–1376.
Wilson, Robert (1987), "Game theoretic approaches to trading processes." In Truman Bewley, ed., Advances in Economic Theory: Fifth World Congress.
Wolitzky, Alexander (2016), "Mechanism design with maxmin agents: Theory and an application to bilateral trade." Theoretical Economics, 11, 971–1004.
Appendix: main missing proofs
Proof of Lemma 2: As $m \in (0, \bar v)^n$, the results of Smith (1995) apply. In particular, $\inf_{F \in \Delta(m, \bar v)} R(M, F) = \inf_{F \in \Delta_{fin}(m, \bar v)} R(M, F)$, where $\Delta_{fin}(m, \bar v)$ is the subset of $\Delta(m, \bar v)$ consisting of all distributions with finite support. So it is sufficient to prove that $\inf\{R \mid (m, R) \in conv(graph(t))\} = \inf_{F \in \Delta_{fin}(m, \bar v)} R(M, F)$. But this is certainly true, as one of the definitions of the convex hull of a subset $S$ of a Euclidean space is the set of all finite convex combinations of points in $S$. $\square$

Proof of Lemma 4:
Consider some bidder $i$ and some profile of values $v \in W_{\ge i}(p) \setminus W_i(p)$. It must be that $v_i = p_i(v_{-i})$. Either $p_i(v_{-i}) < \bar v$ or $p_i(v_{-i}) = \bar v$. Suppose first that $p_i(v_{-i}) < \bar v$. Then any neighborhood of $v$ has a nonempty intersection with $\{v : v_i > p_i(v_{-i})\}$, and hence with $W_i(p)$. Thus, $p_i(v_{-i}) - \lambda v \ge \inf_{v \in W_i(p)}(p_i(v_{-i}) - \lambda v)$, which means that adding $v$ to $W_i(p)$ won't change the value of (8).

Now suppose that $p_i(v_{-i}) = \bar v$. Because $v \in W_{\ge i}(p) \setminus W_i(p)$, $v \in \{v : v_i = p_i(v_{-i})\}$ and thus $v \in \bigcup_{i=1}^n \{v : v_i = p_i(v_{-i})\}$. Also, $v \notin O_i$. Suppose first that $v \in \bigcup_{i=1}^n \{v : v_i = p_i(v_{-i})\} \setminus \bigcup_{i=1}^n \{v : v_i > p_i(v_{-i})\}$. By condition 3, $v \in \bigcup_{i=1}^n O_i$ and hence $v \in O_j$ for some $j \ne i$. Hence, $v \in W_j(p)$ for some $j \ne i$. Now suppose that $v \in \bigcup_{i=1}^n \{v : v_i > p_i(v_{-i})\}$. Again, as $v_i = p_i(v_{-i})$, it means that $v \in W_j(p)$ for some $j \ne i$. This ensures the second inequality in the following chain:
$$p_i(v_{-i}) - \lambda v = \bar v - \lambda v \ge p_j(v_{-j}) - \lambda v \ge \inf_{v \in W_j(p)}(p_j(v_{-j}) - \lambda v). \tag{35}$$
Thus, adding $v$ to $W_i(p)$ won't change the value of (8). $\square$

Proof of Proposition 4:
Consider four sets of bidders: (i) those with $v_i^* = \bar v$; (ii) those with $v_i^* \in (0, \bar v)$; (iii) those with $v_i^* = 0$ and $\lambda_{-i}^*(\bar v \iota_{n-1} - v_{-i}^*) > 0$; (iv) those with $v_i^* = 0$ and $\lambda_{-i}^*(\bar v \iota_{n-1} - v_{-i}^*) = 0$ (here $\iota_{n-1}$ denotes the $(n-1)$-vector of ones). We prove that for every type of bidder, $\hat p_i \ge \tilde p_i$.

First, if $v_i^* = \bar v$, then $\hat p_i(v_{-i}) \equiv \bar v \ge \tilde p_i(v_{-i})$ for all $v_{-i}$.

Second, if $v_i^* \in (0, \bar v)$, then the representation (15) is valid for $\tilde p_i$ and thus Lemma 6 is applicable directly, with $\chi_{-i}^* = \lambda_{-i}^*$, as in Case 1. Thus, $\hat p_i \ge \tilde p_i$.

Third, suppose $v_i^* = 0$ and $\lambda_{-i}^*(\bar v \iota_{n-1} - v_{-i}^*) > 0$. Define
$$k_i := \frac{\lambda_{-i}^* \bar v \iota_{n-1} + b_i}{\lambda_{-i}^*(\bar v \iota_{n-1} - v_{-i}^*)}. \tag{36}$$
We prove that, with such a choice of $k_i$, $p_i^{aux}(v_{-i}) \ge \tilde p_i(v_{-i})$ at any point $v_{-i}$. First, for $v_{-i}$ such that $\lambda_{-i}^* v_{-i} < \lambda_{-i}^* v_{-i}^*$, $p_i^{aux}(v_{-i}) = 0$, so the inequality holds. Now consider $v_{-i}$ such that $\lambda_{-i}^* v_{-i} \ge \lambda_{-i}^* v_{-i}^*$. Any such point can be represented as a convex combination of $\bar v \iota_{n-1}$ and a point $u$ satisfying $\lambda_{-i}^* u = \lambda_{-i}^* v_{-i}^*$. Namely, $v_{-i} = t \cdot u + (1 - t) \cdot \bar v \iota_{n-1}$, where
$$t = \frac{\lambda_{-i}^*(\bar v \iota_{n-1} - v_{-i})}{\lambda_{-i}^*(\bar v \iota_{n-1} - v_{-i}^*)} \quad \text{and} \quad u = \bar v \iota_{n-1}\, \frac{\lambda_{-i}^*(v_{-i}^* - v_{-i})}{\lambda_{-i}^*(\bar v \iota_{n-1} - v_{-i})} + v_{-i}\, \frac{\lambda_{-i}^*(\bar v \iota_{n-1} - v_{-i}^*)}{\lambda_{-i}^*(\bar v \iota_{n-1} - v_{-i})}.$$
Note that $t \in [0, 1]$ because $\lambda_{-i}^* v_{-i} \ge \lambda_{-i}^* v_{-i}^*$. $\tilde p_i$ is a convex function and thus
$$\tilde p_i(v_{-i}) = \tilde p_i(t \cdot u + (1 - t) \cdot \bar v \iota_{n-1}) \le t\, \tilde p_i(u) + (1 - t)\, \tilde p_i(\bar v \iota_{n-1}). \tag{37}$$
However, $\tilde p_i(u) = \max\{\lambda_{-i}^* u + b_i, 0\} = \max\{\lambda_{-i}^* v_{-i}^* + b_i, 0\} = v_i^* = 0$. Moreover,
$$(1 - t)\, \tilde p_i(\bar v \iota_{n-1}) = \max\left\{\frac{\lambda_{-i}^*(v_{-i} - v_{-i}^*)}{\lambda_{-i}^*(\bar v \iota_{n-1} - v_{-i}^*)}\big(\lambda_{-i}^* \bar v \iota_{n-1} + b_i\big),\ 0\right\} = p_i^{aux}(v_{-i}).$$
Combining this with (37) yields the desired inequality $p_i^{aux} \ge \tilde p_i$.

The second inequality $\hat p_i \ge p_i^{aux}$ follows from Lemma 6 with $\chi_{-i}^* = k_i \lambda_{-i}^*$. Note that the supposition of Lemma 6 holds due to the fact that $p_i^{aux}(\bar v \iota_{n-1}) = \tilde p_i(\bar v \iota_{n-1})$ by construction and $\tilde p_i(\bar v \iota_{n-1}) \le \bar v$. Hence, $\hat p_i \ge p_i^{aux} \ge \tilde p_i$ as desired.

Fourth, suppose $v_i^* = 0$ and $\lambda_{-i}^*(\bar v \iota_{n-1} - v_{-i}^*) = 0$. As every term $\lambda_j^*(\bar v - v_j^*)$ is nonnegative, we must have $\lambda_j^*(\bar v - v_j^*) = 0$ for all $j \ne i$. Denote by $J_{max}$ the set of indices $j \ne i$ such that $v_j^* = \bar v$. Then it follows that
$$0 = v_i^* = \max\Big\{\sum_{j \in J_{max}} \lambda_j^* \bar v + b_i,\ 0\Big\}.$$
But then, due to the nonnegativity of $\lambda_j^*$,
$$\tilde p_i(v_{-i}) \le \max\Big\{\sum_{j \in J_{max}} \lambda_j^* \bar v + b_i,\ 0\Big\} = 0.$$
Hence $\tilde p_i(v_{-i}) \equiv 0$, and thus the inequality $\hat p_i \ge \tilde p_i$ holds.

We have shown that for every $i$, $\hat p_i \ge \tilde p_i$. This implies that all $\inf_{v_{-i} \in [0, \bar v]^{n-1}}(p_i(v_{-i}) - \lambda_{-i}^* v_{-i})$ increase when $\tilde p_i$ is replaced by $\hat p_i$. To cover the remaining infimum, $\inf_{v \in W(p)}(-\lambda^* v)$, note that
$$\inf_{v \in W(\hat p)}(-\lambda^* v) = +\infty > \inf_{v \in W(\tilde p)}(-\lambda^* v), \tag{38}$$
as $W(\hat p) = \emptyset$ due to the fact that $v_i^* = 0$ for some $i$, and so the object is always allocated in LSA($v^*$). $\square$

Proof of Proposition 5:
The fact that $w_i^* = 0$ for some $i$ follows from $s^* < n$. Now we show that $w^* \in [0, \bar v]^n$. That $w^* \ge 0$ follows from Lemma 7, part 2, as $b_{restr} \ge 0$. Because $\bar v \iota_n \in V^*$, it must be that
$$A_{restr}\, \bar v \iota_{s^*} - \bar v \iota_{s^*} \sum_{i > s^*} \lambda_i^* = b_{restr}, \quad \text{or} \quad \bar v \iota_{s^*} = (A_{restr})^{-1}\Big(b_{restr} + \bar v \iota_{s^*} \sum_{i > s^*} \lambda_i^*\Big),$$
so again by Lemma 7, part 2, $(A_{restr})^{-1} b_{restr} \le \bar v \iota_{s^*}$, and hence $w^* \le \bar v \iota_n$.

By construction, $w^*$ satisfies the first $s^*$ equations of the system $v_i = \tilde p_i(v_{-i})$. It remains to show that it satisfies equations $s^* + 1, \ldots, n$, that is,
$$0 = \max\Big\{\sum_{i=1}^{s^*} \lambda_i^* w_i^* + b_j,\ 0\Big\}, \quad j > s^*.$$
By the ranking of the $b_i$, it is sufficient to show that $\sum_{i=1}^{s^*} \lambda_i^* w_i^* \le -b_{s^*+1}$. Writing the LHS using Lemma 7, part 3, we get
$$\frac{\sum_{i=1}^{s^*} \frac{\lambda_i^*}{1+\lambda_i^*}\, b_i}{1 - \sum_{i=1}^{s^*} \frac{\lambda_i^*}{1+\lambda_i^*}} \le -b_{s^*+1}.$$
Substituting for $b_i$ from (23), after simplifications one gets
$$\lambda_{s^*+1}^* \ge \frac{\sum_{i=1}^{s^*} \frac{\lambda_i^*}{1+\lambda_i^*}}{\Big(1 + \sum_{i \ne s^*+1} \lambda_i^*\Big)\Big(1 - \sum_{i=1}^{s^*} \frac{\lambda_i^*}{1+\lambda_i^*}\Big)}. \tag{39}$$
But (39) is true because
$$\lambda_{s^*+1}^* > \frac{1 - \sum_{i \ne s^*+1} \frac{\lambda_i^*}{1+\lambda_i^*}}{\sum_{i \ne s^*+1} \frac{\lambda_i^*}{1+\lambda_i^*}} \ge \frac{\sum_{i=1}^{s^*} \frac{\lambda_i^*}{1+\lambda_i^*}}{\Big(1 + \sum_{i \ne s^*+1} \lambda_i^*\Big)\Big(1 - \sum_{i=1}^{s^*} \frac{\lambda_i^*}{1+\lambda_i^*}\Big)},$$
where the first inequality is equivalent to $\sum_{i=1}^n \frac{\lambda_i^*}{1+\lambda_i^*} > 1$, the supposition in Subcase 2, and the second is equivalent to $\sum_{i=1}^{s^*} \lambda_i^* \le \sum_{i \ne s^*+1} \lambda_i^*$. $\square$

Online Appendix A: More missing proofs
Proof of proposition 1:
Denote by $R(r, \lambda)$ the value of (10) when the threshold functions $p$ are given by (5) for some $r \in [0, \bar v]^n$. The revenue guarantee of the auction is given by $R(r) := \max_{\lambda \in \mathbb{R}^n} R(r, \lambda)$ (the supremum is achieved by Lemma 3). We will prove that $R(r)$ is upper semi-continuous, which implies the result. Note that $R(r, \lambda)$ is continuous on $(0, \bar v]^n \times \mathbb{R}^n$, so, by a Berge's maximum theorem argument, $R(r)$ is continuous on $(0, \bar v]^n$. Also, $R(r, \lambda)$ is continuous on $\mathcal{R} \times \mathbb{R}^n$ (when its domain is restricted to $\mathcal{R} \times \mathbb{R}^n$), where $\mathcal{R} := \{r \in [0, \bar v]^n : r_i = 0 \text{ for some } i\}$, so $R(r)$ is continuous on $\mathcal{R}$ when its domain is restricted to $\mathcal{R}$. Hence, to show that $R(r)$ is upper semi-continuous on the full domain, it is sufficient to show that for every sequence of points $r^k \in (0, \bar v]^n$ with limit $r \in \mathcal{R}$, we have $\lim_{k \to \infty} R(r^k) \le R(r)$.

To this end, extend the function $R(r, \lambda)$ from $(0, \bar v]^n$ to the whole $[0, \bar v]^n$ by continuity and call the extension $R^{ext}(r, \lambda)$, with $R^{ext}(r) = \max_{\lambda \in \mathbb{R}^n} R^{ext}(r, \lambda)$. Note that $R^{ext}(r, \lambda) \le R(r, \lambda)$, and thus $R^{ext}(r) \le R(r)$, if $r \in \mathcal{R}$, and $R^{ext}(r, \lambda) = R(r, \lambda)$, $R^{ext}(r) = R(r)$ otherwise. By the usual argument, $R^{ext}(r)$ is continuous. Thus, $\lim_{k \to \infty} R(r^k) = \lim_{k \to \infty} R^{ext}(r^k) = R^{ext}(r) \le R(r)$. $\square$

Proof of Lemma 3:
Denote the function being maximized in (7) by $G(\lambda): \mathbb{R}^n \to \mathbb{R}$. This function is concave; thus it is continuous. First, observe that for any unbounded sequence $\lambda^k$, the sequence $G(\lambda^k)$ is unbounded from below. To see this, note that if $\lambda_i^k$ is not bounded from above, one can set $v_i = \bar v$ in the minimization in (7), whereas if $\lambda_i^k$ is not bounded from below, one can set $v_i = 0$; hence, $G(\lambda^k)$ is majorized by a sequence that is unbounded from below.

Now take any maximizing sequence $\lambda^k$ (i.e., a sequence such that $\lim_{k \to \infty} G(\lambda^k) = \sup_\lambda G(\lambda)$). As the sequence $G(\lambda^k)$ is clearly bounded from below, by the above observation we know that $\lambda^k$ must lie in a bounded set $Q$. Therefore it has a subsequence $\lambda^{k_s}$ converging to some $\lambda^* \in cl(Q) \subset \mathbb{R}^n$. But then, by continuity of $G$, $\sup_\lambda G(\lambda) = \lim_{s \to \infty} G(\lambda^{k_s}) = G(\lambda^*)$. $\square$

Proof of Lemma 7:
Consider the homogeneous system $Av = 0$. By subtracting row $j > 1$ from row 1, one obtains $(1 + \lambda_1)v_1 = (1 + \lambda_j)v_j$ for all $j > 1$. Plugging this into equation 1, one gets $\big(1 - \sum_{i=1}^n \frac{\lambda_i}{1+\lambda_i}\big)(1 + \lambda_1)v_1 = 0$. Thus, the system has a unique (trivial) solution if and only if $1 \ne \sum_{i=1}^n \frac{\lambda_i}{1+\lambda_i}$; that is, $\det(A) = 0$ if and only if $1 = \sum_{i=1}^n \frac{\lambda_i}{1+\lambda_i}$. Because the determinant is a multilinear function of the rows of $A$, it must be equal to $C\big(1 - \sum_{i=1}^n \frac{\lambda_i}{1+\lambda_i}\big)\prod_{i=1}^n (1 + \lambda_i)$ for some $C \in \mathbb{R}$. As $\det(A(0)) = 1$, $C = 1$. This proves part 1. It also follows that the set of solutions to the system is at most one-dimensional. This proves part 4.

To prove parts 2 and 3, consider a non-homogeneous system $Av = b$. By subtracting rows as above, one obtains that its unique (provided that $\det(A) \ne 0$) solution $v(b)$ is given by
$$v_i(b) = \frac{b_i\Big(1 - \sum_{k \ne i} \frac{\lambda_k}{1+\lambda_k}\Big) + \sum_{k \ne i} \frac{\lambda_k}{1+\lambda_k}\, b_k}{(1 + \lambda_i)\Big(1 - \sum_{i=1}^n \frac{\lambda_i}{1+\lambda_i}\Big)}.$$
As $A^{-1}_{ij} = \partial v_i / \partial b_j$, part 2 follows from $\lambda \ge 0$ and $\det(A) > 0$. Part 3 follows from the above analytic solution and the fact that $\lambda v(b) = (1 + \lambda)v(b) - b$. $\square$

Proof of Lemma 8:
The proof is by solving the optimization problem $\max_\lambda \sum_{i=1}^n \frac{\lambda_i}{1+\lambda_i}$ subject to (24) and the nonnegativity constraint. The solution is $\lambda_i = \lambda_j = \frac{1}{n-1}$. $\square$

Proof of Lemma 11: $\inf_{W(p)}(-\lambda v) = -\lambda r$ whenever $W(p) \ne \emptyset$. Now consider the problem $\inf_{v_{-i} \in [0, \bar v]^{n-1}}(p_i(v_{-i}) - \lambda_{-i} v_{-i})$. Consider the minimization over some $v_j$, $j \ne i$, with $v_s$, $s \notin \{i, j\}$, fixed. Given (5), the objective is convex and piecewise-affine and $\lambda_j \ge 0$, so the optimal choice of $v_j$ is either $\bar v$ (if $\lambda_j \ge \frac{\bar v - r_i}{\bar v - r_j}$) or such that $\frac{v_j - r_j}{\bar v - r_j} = \max\big\{0,\ \max_{s \ne i,j} \frac{v_s - r_s}{\bar v - r_s}\big\}$ (otherwise). As this is true for every $j$, $v_j = \bar v$ for some $j$ implies that $v_s = \bar v$ for all $s \ne i$. If, on the other hand, for every $j$ the second possibility materializes, we obtain that $\frac{v_j - r_j}{\bar v - r_j} = \eta$, $\eta \in [0, 1]$, for all $j \ne i$. Note that in the first case ($v_j = \bar v$ for all $j \ne i$) this condition is also satisfied. Thus, it is sufficient to optimize over $\eta$. Given the linearity of the objective, the two possible corner solutions will yield an infimum of either $r_i - \lambda_{-i} r_{-i}$ or $\bar v - \bar v \sum_{j \ne i} \lambda_j$, from which the expression (25) follows. $\square$

Proof of Lemma 12:
For brevity, we will write r ∗ instead of r ∗ ( λ ).Define g ( r ) := min n min i ( r i − λ − i r − i − λ i v ) , − λr o , r > , min i ( r i − λ − i r − i − λ i v ) , o/w . (40)We will prove that if P i λ i λ i ≤ r ∗ maximizes g ( r ). Then it will follow that r ∗ maximizes R ( r, λ ).Note that g ( r ) is discontinuous at any point of the set R := { r ∈ [0 , v ] n : r i =0 for some i } . We will show directly that g ( r ∗ ) ≥ g ( r ) for any other point r ∈ [0 , v ] n . To47his end, we will construct a finite sequence of points starting with any point r ∈ (0 , v ] n and ending with r ∗ such that the value of g weakly increases at every step. Denote E i ( r ) := r i − λ − i r − i − λ i v . We will proceed by considering two cases: (1) the startingpoint r / ∈ R ; (2) r ∈ R . Case 1.
The starting point r / ∈ R . Note that if r i > r ∗ i for some i , g ( r ) is improvedby changing r i to r ∗ i . Indeed, r i > r ∗ i is equivalent to E i ( r ) > − λr , so E i is higher thanthe overall minimum in the first line in (40). Decreasing r i to r ∗ i will weakly increase − λr and E j ( r ) for all j = i , thus weakly increasing the value of g . If the new point ˜ r is in R , proceed directly to Case 2 below. If after coordinates r i such that r i > r ∗ i were alllowered to r ∗ i , we are still not in R proceed as follows.Consider any r ∈ (0 , v ] n such that r i ≤ r ∗ i for all i . Note that E i ( r ) ≤ − λr for all i .Denote by E (1) ( r ) the lowest of E i and by E (2) ( r ) the second lowest. Assume that thenumbers E i ( r ) are not all identical, i.e. E (1) ( r ) < E (2) ( r ) (if they are not, skip right awayto the next paragraph). Denote by L ( r ) the set { i : E i ( r ) = E (1) ( r ) } and by L ( r ) theset { i : E i ( r ) = E (2) ( r ) } . Now change all r i for i in L ( r ) in such a way that the newpoint ˜ r satisfies L (˜ r ) = L ( r ) ∪ L ( r ), i.e. the values of E i for i ∈ L ( r ) and i ∈ L ( r ) areequalized. The key is to show that g ( r ) won’t decrease after this move. First, all prices r i for i ∈ L ( r ) will increase after this move, as the inequality E (1) ( r ) < E (2) ( r ) is equivalentto (1 + λ i ) r i − λ i v < (1 + λ j ) r j − λ j v for all i ∈ L ( r ) and j ∈ L ( r ). Second, because E i = E j for all i, j ∈ L ( r ), r j = (1+ λ i ) r i +( λ j − λ i ) v λ j for all i, j ∈ L ( r ) and thus g ( r ) = E (1) ( r ) = r i (1 + λ i ) − X j ∈ L ( r ) λ j λ j + const, (41)for all i ∈ L ( r ), where const does not depend on r i for i / ∈ L ( r ). As r i will increaseduring the move and, by supposition, P λ i λ i ≤ g ( r ) will weakly increase. Also, notethat the new prices ˜ r are still weakly lower than r ∗ because E i ( r ) ≤ − λr still holds forall i after the move.Now iterate this procedure until all E i ( r ) are equalized. 
At each step, the value of g(r) weakly increases, and the eventual price vector r̃ satisfies r̃ ≤ r*. At the last step, increase r̃ to r*; g(r) grows again due to the representation (41). This finishes Case 1.

Case 2.
The starting point r ∈ R. We will extend g(r) to the set R_ext := {r ∈ (−∞, v̄]^n : r_i = 0 for some i} by the same formula as on R and show that for every r ∈ R there is a point r** ∈ R_ext such that g(r**) ≥ g(r). After that, we show that g(r*) ≥ g(r**) for all such r**.

Given a point r ∈ R, take any k such that r_k = 0. If E_i(r) > E_k(r), one can lower r_i down to the point at which E_i(r) = E_k(r) without harming g (note that this is possible because R_ext is unbounded from below). If, on the other hand, E_i(r) ≤ E_k(r) for all i, the same construction as in Case 1 shows that one can weakly increase prices in a certain fashion such that g is again unharmed. The final point of this process will be a point r**(k) ∈ R_ext such that E_i(r**) = E_k(r**) for all i and g(r**(k)) ≥ g(r). r**(k) is given by r**_i(k) = (λ_i − λ_k) v̄/(1 + λ_i) for all i. It remains to prove that g(r**(k)) ≤ g(r*) for all k. The inequality g(r**(k)) ≤ g(r*) reads as

−Σ_{j≠k} λ_j (λ_j − λ_k) v̄/(1 + λ_j) − λ_k v̄ ≤ −Σ_{i=1}^n λ_i² v̄/(1 + λ_i),

which is equivalent to

λ_k ( Σ_{i=1}^n λ_i/(1 + λ_i) − 1 ) ≤ 0,

which is certainly true due to the supposition of the lemma. □

Proof of Lemma 13:
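Before the argument, the comparison at the heart of Lemma 13 can be checked numerically: which argument of the min in R(r*(λ), λ) is smaller, depending on whether Σ_i λ_i/(1 + λ_i) is above or below 1. The multipliers below are illustrative:

```python
# Check of the comparison used in Lemma 13: when sum_i lam_i/(1+lam_i) >= 1,
# vbar*(1 - sum_i lam_i) is the weakly smaller argument of the min in
# R(r*(lam), lam); otherwise it is the weakly larger one.  Illustrative values.
vbar = 1.0

for lam in ([1.0, 1.0], [0.8, 0.9, 1.5], [1.0, 0.0]):
    s = sum(l / (1 + l) for l in lam)
    first = vbar * (1 - sum(lam))                       # vbar*(1 - sum lam_i)
    second = -sum(l * l * vbar / (1 + l) for l in lam)  # -sum lam_i^2 vbar/(1+lam_i)
    if s >= 1:
        assert first <= second + 1e-12
    else:
        assert first >= second - 1e-12
```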
We will prove that for all λ such that Σ_i λ_i/(1 + λ_i) ≥ 1, max_r R(r, λ) = mλ + v̄(1 − Σ_i λ_i). Then the result will follow because, when the inequality is strict, one can lower some λ_j to make it an equality, and the value of revenue will strictly increase. If lowering one λ_j is not enough (i.e., Σ_i λ_i/(1 + λ_i) > 1 even when λ_j = 0), one can start lowering another λ_j, and so on. One will always reach the equality Σ_i λ_i/(1 + λ_i) = 1 because its LHS is zero when all λ_i are equal to zero.

Indeed, by (25), we have that R(r, λ) ≤ mλ + v̄(1 − Σ_i λ_i) for all r. On the other hand,

R(r*(λ), λ) = mλ + min{ v̄(1 − Σ_{i=1}^n λ_i), −Σ_{i=1}^n λ_i² v̄/(1 + λ_i) }.

If Σ_i λ_i/(1 + λ_i) ≥
1, the first argument of the minimum is weakly lower, so R(r*(λ), λ) = mλ + v̄(1 − Σ_i λ_i). Hence, the upper bound is achieved, and thus max_r R(r, λ) = mλ + v̄(1 − Σ_i λ_i). □

Proof of Lemma 14:
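A quick numerical check of the claimed solution of the relaxed problem (constraint (27) dropped): the objective is separable, so each term can be maximized on a grid and compared with λ*_i = √(v̄/(v̄ − m_i)) − 1. The means below are illustrative:

```python
# Check that lam_i = sqrt(vbar/(vbar - m_i)) - 1 maximizes each separable
# term m_i*l - vbar*l^2/(1+l) of the relaxed problem.  Illustrative means.
import math

vbar = 1.0
means = [0.2, 0.5, 0.75]

for m in means:
    lam_star = math.sqrt(vbar / (vbar - m)) - 1
    f = lambda l: m * l - vbar * l * l / (1 + l)
    # compare against a fine grid of alternative multipliers in [0, 5]
    assert all(f(lam_star) >= f(k / 1000) - 1e-12 for k in range(0, 5001))
```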
First, solve the relaxed problem without the constraint (27). The solution is λ*_i = √(v̄/(v̄ − m_i)) − 1. This solution is the solution to the unrelaxed problem whenever it satisfies the constraint (27), which is exactly when Σ_{i=1}^n √(1 − m_i/v̄) ≥ n − 1. Now suppose Σ_{i=1}^n √(1 − m_i/v̄) < n −
1. Denoting by ξ the Lagrange multiplier on (27) and by κ_i the multipliers on the nonnegativity constraints, one can show that the multipliers

ξ* = v̄ − ( (1/(n − k*)) Σ_{i=k*}^n √(v̄ − m_i) )²,    κ*_i = max{ ξ* − m_i, 0 },

together with the proposed solution (29), satisfy the first-order and complementary slackness conditions. As there is no other solution to the first-order and complementary slackness conditions, the objective function is continuous, and the feasible set is compact, we conclude that λ* is the solution to the problem (26)–(27), as desired. □

Proof of Proposition 6:
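A small numerical check of a fact used at the end of this proof: when the constraint Σ_i λ_i/(1 + λ_i) = 1 binds, the prices r*_i = λ_i v̄/(1 + λ_i) sum exactly to v̄. The multipliers below are illustrative, with the last one solved from the binding constraint:

```python
# When sum_i lam_i/(1+lam_i) = 1 binds, the reserves r*_i = lam_i*vbar/(1+lam_i)
# sum to vbar.  Fix all multipliers but the last; solve the last from equality.
vbar = 2.0
lam = [0.4, 0.7]
rest = 1 - sum(l / (1 + l) for l in lam)   # share left for the last bidder
lam.append(rest / (1 - rest))              # l/(1+l) = rest  =>  l = rest/(1-rest)

assert abs(sum(l / (1 + l) for l in lam) - 1) < 1e-12
r_star = [l * vbar / (1 + l) for l in lam]
assert abs(sum(r_star) - vbar) < 1e-12
```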
Consider the case of low means first. In this case, all λ*_i are strictly positive and Σ_i λ*_i/(1 + λ*_i) <
1. Hence, at all steps in the proof of Lemma 12, the value of g(r) increases strictly. Furthermore, g(r*(λ*)) = −Σ_i λ*_i² v̄/(1 + λ*_i) < v̄(1 − Σ_i λ*_i). Hence, R(r, λ*) = mλ* + g(r) at all steps in the proof of Lemma 12. Thus, r*(λ*) is the unique maximizer of R(r, λ*). Plugging λ* given by (28) into r*_i = λ*_i v̄/(1 + λ*_i), one gets the desired answer (30).

Now suppose means are high. First, note that if λ*_i = 0, then lowering r_i to r*_i = 0 does not change the value of g(r); hence, any price r_i ∈ [0, v̄] is optimal. Without loss of generality, now suppose that all bidders are strictly included. At all other steps in the proof of Lemma 12, g(r) increases strictly, except the last step, where all E_i(r) are already equalized and one is ready to equalize them with −λ·r. Indeed, as L(r) now includes all bidders and Σ_i λ*_i/(1 + λ*_i) = 1, the expression in parentheses in (41) is zero. Thus, a vector of prices r is optimal if and only if E_i(r) = E_j(r) = g(r*(λ*)) = v̄(1 − Σ_i λ*_i) for all i, j and r_i ≤ r*_i(λ*) for all i. Thus, r should satisfy a non-homogeneous system of linear equations given by A(λ*) r = b, where b_i = v̄(1 − Σ_{j≠i} λ*_j). As Σ_i λ*_i/(1 + λ*_i) = 1, by Lemma 7, part 1, A(λ*) is singular and its rank is n −
1. Because r*(λ*) is a solution to the non-homogeneous system and v̄ ı_n − r*(λ*) is a solution to the homogeneous system, the set of solutions is given by {r*(λ*) − α(v̄ ı_n − r*(λ*)) | α ∈ ℝ}. Thus, every vector of optimal prices r̂ satisfies v̄ ı_n − r̂ = (1 + α)(v̄ ı_n − r*(λ*)), and thus

(v̄ − r̂_i)/(v̄ − r̂_j) = (v̄ − r*_i)/(v̄ − r*_j) = (1 + λ*_j)/(1 + λ*_i) = √((v̄ − m_i)/(v̄ − m_j)),

where we used the expressions for λ*_i from (29). Finally, summing the inequalities r̂_i ≤ r*_i(λ*), one gets

Σ_{i=1}^n r̂_i ≤ Σ_{i=1}^n λ*_i v̄/(1 + λ*_i) = v̄. □

Proof of Proposition 7:
First, take any mechanism M = (p_1(v_2), p_2(v_1)) that satisfies the conditions in the proposition. It is straightforward to check, using the representation (8), that R(M, λ*(LSA_opt)) = R*, where λ*(LSA_opt) are the optimal Lagrange multipliers for the optimal linear score auction, given by (28). As R* = R(M, λ*(LSA_opt)) ≤ R(M, λ*(M)) ≤ R*, it must be that R(M, λ*(M)) = R*, so M is optimal.

Now, take any optimal mechanism M. By Lemma 15, λ*_i(M) >
0. Hence, when M is submitted as a starting mechanism to the proof of Theorem 1, one gets into Grand case I. Denote by p̃(M) the modified threshold functions for M as constructed in the proof of Theorem 1, given by (12). We will reconstruct p̃(M) from the optimality of M. First, note that the LSA that weakly dominates M should in fact be the optimal LSA. Because any dominating LSA is such that its vector of reserve prices r is a fixed point of p̃, it must be that r*, as defined by (30), is a fixed point of p̃(M). On the other hand, by Lemma 15, p̃_i must have the slope λ*_i given by (28). Thus, p̃_i should be given by the RHS in condition 1 in the proposition. As p̃_i ≤ p_i, condition 1 in the proposition must be fulfilled.

To establish condition 2, note that because M is optimal, revenue cannot strictly increase at any step in the proof of Theorem 1, and thus inf_{W_0(p)}(−λ*(M)·v) = inf_{W_0(p̂)}(−λ*(M)·v), where p are the threshold functions corresponding to M and p̂ are the threshold functions for the optimal linear score auction. Thus, the set W_0(p) must lie weakly below the line λ*_1 v_1 + λ*_2 v_2 = λ*_1 r*_1 + λ*_2 r*_2. However, if condition 2 is violated, this is not true. Indeed, suppose p_2(v_1) > (λ*_1 r*_1 + λ*_2 r*_2 − λ*_1 v_1)/λ*_2 for some v_1 ≤ r*_1. Take any v_2 ∈ ((λ*_1 r*_1 + λ*_2 r*_2 − λ*_1 v_1)/λ*_2, p_2(v_1)). Then p_2(v_1) > v_2 by construction and p_1(v_2) > v_1 because condition 1 is satisfied for the function p_1(v_2). Hence, (v_1, v_2) is a point in W_0(p) that lies above the line λ*_1 v_1 + λ*_2 v_2 = λ*_1 r*_1 + λ*_2 r*_2. Contradiction.

To establish condition 3, suppose to the contrary that there exist points v′_1 and v″_1 such that r*_1 ≤ v′_1 < v″_1 and p_2(v″_1) < p_2(v′_1). Consider any v_2 ∈ (p_2(v″_1), p_2(v′_1)). First, note that for any such v_2 we must have p_1(v_2) ≥ v″_1. If not, then the point (v″_1, v_2) proves that the functions p violate the supply constraint. Second, for any such v_2 we must have p_1(v_2) ≤ v′_1.
If not, the point (v′_1, v_2) belongs to the set W_0(p) and lies above the line λ*_1 v_1 + λ*_2 v_2 = λ*_1 r*_1 + λ*_2 r*_2, which is impossible by condition 2. Summing up, we have p_1(v_2) ≤ v′_1 < v″_1 ≤ p_1(v_2), a contradiction. Thus, p_2(v_1) has to be weakly increasing. Analogously for p_1(v_2).

Finally, take any interval (v′_1, v″_1) with v′_1 ≥ r*_1 such that p_2(v_1) is strictly increasing on it, and any point v_1 ∈ (v′_1, v″_1). Suppose to the contrary that p_1(p_2(v_1)) ≠ v_1. Either p_1(p_2(v_1)) > v_1 or p_1(p_2(v_1)) < v_1. If p_1(p_2(v_1)) > v_1, take any point v⁰_1 ∈ (v_1, p_1(p_2(v_1))). Then, as p_2(v_1) is weakly increasing overall and strictly increasing on (v′_1, v″_1), it must be that p_2(v⁰_1) > p_2(v_1). But then the point (v⁰_1, p_2(v_1)) belongs to the set W_0(p) and lies above the line λ*_1 v_1 + λ*_2 v_2 = λ*_1 r*_1 + λ*_2 r*_2, which is impossible by condition 2. If, on the other hand, p_1(p_2(v_1)) < v_1, take any point v⁰_1 ∈ (p_1(p_2(v_1)), v_1). Again, it must be that p_2(v⁰_1) < p_2(v_1). But then the point (v⁰_1, p_2(v_1)) proves that the functions p violate the supply constraint. Contradiction. Analogously for p_1(v_2). □

Proof of Proposition 8:
The proof of sufficiency is the same as in the proof of Proposition 7. The proof of necessity is as follows.

Take any optimal mechanism M. By Lemma 15, λ*_i(M) is given by (29). By the same logic as in the proof of Proposition 7, some r̂ that satisfies the conditions (31) is a fixed point of p̃(M). If r̂ >
0, this, along with λ*_i, pins down p̃(M) to be the functions given in condition 1 in the proposition (regardless of r̂). If r̂_i = 0 for some i, we have 0 = max{λ*_{−i} r̂_{−i} + b_i, 0}. If λ*_{−i} r̂_{−i} + b_i <
0, then inf_{v_{−i}}(p_i(v_{−i}) − λ*_{−i} v_{−i}) = b_i < −λ*_{−i} r̂_{−i} = inf_{v_{−i}}(p̂_i(v_{−i}) − λ*_{−i} v_{−i}), so M is not optimal. Thus, we must have λ*_{−i} r̂_{−i} + b_i = 0, so p̃(M) are reconstructed unambiguously to be the functions defined in the RHS of conditions 1 and 2 in the proposition. Thus, condition 2 holds.

To establish condition 3, note that for any optimal LSA p̂, revenue is equal to λ* m + inf_{W_0(p̂*)}(−λ*·v), where p̂* is the optimal LSA with the highest prices r*. Thus, for M to be optimal it must be that inf_{W_0(p)}(−λ*·v) ≥ inf_{W_0(p̂*)}(−λ*·v). Thus, the set W_0(p) must lie weakly below the line λ*_1 v_1 + λ*_2 v_2 = λ*_1 r*_1 + λ*_2 r*_2. Then, one obtains condition 3 similarly to the proof of necessity of condition 2 in Proposition 7.

Finally, to establish condition 1, note that if p_i(v_{−i}) > r*_i + λ*_i (v_{−i} − r*_{−i}) for some v_{−i} ≥ r*_{−i}, then the point (v_{−i}, v_i), where v_i is any point in (r*_i + λ*_i (v_{−i} − r*_{−i}), p_i(v_{−i})), belongs to W_0(p) and lies above the line λ*_1 v_1 + λ*_2 v_2 = λ*_1 r*_1 + λ*_2 r*_2, which is impossible by condition 3. □

Proof of Proposition 10:
As noted in the main text, the support of any worst-case distribution must be contained in the set of minimizers of t^{LSA}(v) − λ*(LSA)·v. It remains to find the optimal Lagrange multiplier λ* as a function of the generalized reserve prices r. λ* maximizes

R(r, λ) = mλ + min{ v̄(1 − λ_1 − λ_2), r_1 − λ_2 r_2 − λ_1 v̄, r_2 − λ_1 r_1 − λ_2 v̄, −λ_1 r_1 − λ_2 r_2 }.
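Before the case analysis, a quick numerical check of this objective: multipliers of the form λ_i = r_i/(v̄ − r_i), which appear in the middle configuration below, equalize the last three arguments of the min. The prices are illustrative:

```python
# With lam_i = r_i/(vbar - r_i), the two terms E_i = r_i - lam_j*r_j - lam_i*vbar
# and the no-sale term -lam.r of R(r, lam) coincide.  Illustrative prices.
vbar = 1.0
r = [0.3, 0.45]
lam = [ri / (vbar - ri) for ri in r]

E1 = r[0] - lam[1] * r[1] - lam[0] * vbar
E2 = r[1] - lam[0] * r[0] - lam[1] * vbar
no_sale = -(lam[0] * r[0] + lam[1] * r[1])

assert abs(E1 - no_sale) < 1e-12 and abs(E2 - no_sale) < 1e-12
```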
It follows from a straightforward but tedious analysis that

• If r_2 < r̄_2(r_1) and r_2 < v̄ − r_1, then λ*_1 = (v̄ − r_2)/(v̄ − r_1) and λ*_2 = (v̄ − r_1)/(v̄ − r_2);

• If r_2 > r̄_2(r_1) but r_2 < v̄ − r_1, then λ*_i = r_i/(v̄ − r_i), i = 1, 2;

• If r_2 > v̄ − r_1, then either λ*_1 = r_1/(v̄ − r_1) and λ*_2 = (v̄ − r_1)/(v̄ − r_2), or λ*_1 = (v̄ − r_2)/(v̄ − r_1) and λ*_2 = r_2/(v̄ − r_2).

From this, the statement of the proposition follows. □

Proof of Theorem 2:
The proof is the same as for Theorem 1 with a few modifications. Lemma 4 does not hold, as the inequality (35) does not necessarily hold for i = 2, j = 1. Define W̃_2 := {v : v_2 ≥ p_2(v_1) and v_1 ≤ p_1(v_2)}. We have inf_{v∈W_2}(p_2(v_1) − λ·v) = inf_{v∈W̃_2}(p_2(v_1) − λ·v). For bidder 1, we still have inf_{v∈W_1}(p_1(v_2) − λ·v) = inf_{v∈W_1^≥(p)}(p_1(v_2) − λ·v). The consideration of Grand case II goes identically to Theorem 1, so consider
Grand case I. We have inf_{v∈W̃_2}(p_2(v_1) − λ*·v) = inf_{v_1∈[0, ṽ_1]}(p_2(v_1) − λ*_1 v_1) − λ*_2 v̄_2, where ṽ_1 := p_1(v̄_2). For bidder 1, nothing changes. Thus, at the first step of transforming a given pair of functions (p_1(v_2), p_2(v_1)), we transform p_1(v_2), as before, to p̃_1(v_2), given by (12), and transform p_2(v_1) to

p̃_2(v_1) := max{ λ*_1 v_1 + inf_{w∈[0, ṽ_1]}(p_2(w) − λ*_1 w), 0 } for v_1 ≤ ṽ_1;  v̄_2 for v_1 > ṽ_1.    (42)

By the same logic as in the proof of Proposition 2, R(p̃, λ*) ≥ R(p, λ*).

Note that for all v_2 ∈ [0, v̄_2], p̃_1(v_2) ≤ p̃_1(v̄_2) ≤ p_1(v̄_2) = ṽ_1, as p̃_1 is nondecreasing and p̃_1(v_2) ≤ p_1(v_2) for all v_2. Thus, one may consider p̃ as a continuous map from [0, ṽ_1] × [0, v̄_2] to itself. Thus, a fixed point exists. The rest of the proof goes analogously to the proof of Theorem 1, except the consideration of Subcase 2 of Case 3. There, one shows directly that λ*_1/(1 + λ*_1) + λ*_2/(1 + λ*_2) > 1 implies min{b_1, b_2} < 0. Then the point defined by w* = (max{b_1, 0}, max{b_2, 0}) will be a desired fixed point with at least one zero coordinate.

Note that the construction will identify a dominating linear score auction such that the boundary between W_1 and W_2 goes through the point (ṽ_1, v̄_2). □

Proof of Proposition 9:
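An identity used below can be verified directly: for n = 2, the binding constraint λ_1/(1 + λ_1) + λ_2/(1 + λ_2) = 1 is equivalent to λ_1 λ_2 = 1. Illustrative values of λ_1:

```python
# For n = 2, lam_1/(1+lam_1) + lam_2/(1+lam_2) = 1  <=>  lam_1 * lam_2 = 1.
for lam1 in (0.25, 0.5, 1.0, 3.0):
    # solve lam_2/(1+lam_2) = 1 - lam_1/(1+lam_1) = 1/(1+lam_1)
    share2 = 1 / (1 + lam1)
    lam2 = share2 / (1 - share2)
    assert abs(lam1 * lam2 - 1) < 1e-12
```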
Again, because for any optimal mechanism λ* ≥
0, one can restrict attention to such λ only. Solving the inner problems as in the proof of Lemma 11, one obtains that R(p, λ) is equal to

λm + min{ ṽ − λ_1 v̄_1 − λ_2 v̄_2, v̄_2 − λ_1 ṽ − λ_2 v̄_2, min_i (r_i − λ_{−i} r_{−i} − λ_i v̄_i), −λ·r } if r > 0;
λm + min{ ṽ − λ_1 v̄_1 − λ_2 v̄_2, v̄_2 − λ_1 ṽ − λ_2 v̄_2, min_i (r_i − λ_{−i} r_{−i} − λ_i v̄_i) } otherwise.    (43)

The problem is to maximize this function over r ∈ [0, v̄_1] × [0, v̄_2], λ ∈ ℝ²₊, and ṽ ∈ [0, v̄_1]. Again, it is more convenient to optimize first over r and ṽ, and then over λ. Optimizing over ṽ, one does weakly better equalizing ṽ − λ_1 v̄_1 − λ_2 v̄_2 and v̄_2 − λ_1 ṽ − λ_2 v̄_2, thus setting ṽ* = (λ_1 v̄_1 + v̄_2)/(λ_1 + 1). The common value of the two expressions will be

(λ_1 v̄_1 + v̄_2)/(λ_1 + 1) − λ_1 v̄_1 − λ_2 v̄_2.    (44)

Then, defining r*_i(λ) = λ_i v̄_i/(1 + λ_i), the proofs of Lemmas 12 and 13 go through with (44) replacing v̄(1 − Σ_i λ_i). (Also, one lowers the value of one specific λ_i, not an arbitrary one, to obtain a strict improvement.)

Thus, we arrive at a problem similar to (26)–(27):

max_{λ≥0} Σ_{i=1}^2 [ m_i λ_i − λ_i² v̄_i/(1 + λ_i) ]    (45)

s.t. Σ_{i=1}^2 λ_i/(1 + λ_i) ≤ 1.    (46)

The constraint (46) does not bind when means are low (i.e., when √(1 − m_1/v̄_1) + √(1 − m_2/v̄_2) >
1) and binds otherwise. If it does not bind, we obtain

λ*_i = √(v̄_i/(v̄_i − m_i)) − 1,

and thus, recovering optimal prices using the same logic as in the proof of Proposition 6, one gets (32). To recover all optimal slopes in this case (aside from an optimal slope corresponding to ṽ* = (λ_1 v̄_1 + v̄_2)/(λ_1 + 1)), one uses Proposition 7 (its logic is unchanged when the upper bounds are different). Restricting condition 1 of Proposition 7 to linear score auctions, one gets the condition (33).

If the constraint (46) binds, one obtains from the first-order conditions that the equation (34) must hold for the optimal λ*. But note that when the constraint binds, λ*_1 λ*_2 = 1, and thus the optimal slope γ* is unique (as in Figure 8). Hence, γ* = λ*_1, so γ* satisfies (34). To obtain the set of optimal prices, one again uses the same proof as in Proposition 6. □

B Worst-case distributions
The proof of Theorem 1 did not need to deal with worst-case distributions, thanks to the representation afforded by linear programming duality. However, knowing them is also useful: it shows how Nature reacts to a given mechanism and thus gives a deeper understanding of the game between the seller and Nature. In this section, we discuss worst-case distributions for corner-hitting linear score auctions (not necessarily optimal ones).

The worst-case distributions can be deduced from a complementary slackness result in Smith (1995). Namely, it follows that for any mechanism M, any distribution solving the inner problem (1) also solves the unconstrained problem

inf_{F∈∆} E_F[ t^M(v) − λ*(M)·v ],

where λ*(M) is the solution to the dual problem (7) for the mechanism M and ∆ is the set of all Borel distributions on [0, v̄]^n. That is, if a worst-case distribution exists, its support must be contained in the set of minimizers of the function t^M(v) − λ*(M)·v over [0, v̄]^n.

Consider n = 2 and any corner-hitting linear score auction with parameters r = (r_1, r_2). Before we proceed, we change the rules slightly by stating that when v_i ≤ r_i for i = 1,
2, the object is unsold. This does not change the infimum over distributions but ensures that a worst-case distribution always exists. Then, when λ*_i > 0 for i = 1, 2 (i.e., when r_i < m_i), it follows immediately from the structure of the function t^{LSA}(v) that the support of any worst-case distribution is contained in the set

S* := {(r_1, r_2)} ∪ {v : v_i ≥ r_i, max{v_1, v_2} = v̄}.

Depending on the means m and prices r, different subsets of the set S* may be selected as the support of a worst-case distribution. It turns out that there may be three types of worst-case distributions when r_i < m_i, described in the following table:

Type  Support
I     Any subset of {v : v_i ≥ r_i, max{v_1, v_2} = v̄}
II    {(r_1, r_2), (r_1, v̄), (v̄, r_2)}
III   {(r_1, r_2), (r_1, v̄) or (v̄, r_2), (v̄, v̄)}

Figure 10 shows which type of worst-case distribution obtains for different prices r for fixed means m. Interestingly, if prices are relatively low, Nature chooses a distribution such that sale happens with probability 1 and does not try to "undercut" the seller. Only when prices rise beyond a certain boundary does Nature start to induce no trade with positive probability in the worst case. When prices rise even further, Nature switches from inducing negative correlation between values (so that the lowest bid is always very low) to inducing positive correlation (so that the two bidders are effectively replaced by one).

Figure 10: Different types of distributions are worst-case for different prices r

Define r̄_2(r_1) := (m_2(v̄ − r_1) − v̄(v̄ − m_1))/(m_1 − r_1).

Proposition 10.
Suppose n = 2 and consider a corner-hitting linear score auction with parameters r_1 < m_1 and r_2 < m_2. The rules are modified so that when v_i ≤ r_i for i = 1, 2, the object is unsold. Then, the set of worst-case distributions is a subset ∆^{WC} of ∆(m, v̄) such that for every F ∈ ∆(m, v̄):

1. If r_2 < r̄_2(r_1) and r_2 < v̄ − r_1, F ∈ ∆^{WC} if and only if supp(F) ⊆ {v : v_i ≥ r_i, max{v_1, v_2} = v̄};

2. If r_2 > r̄_2(r_1) but r_2 < v̄ − r_1, F ∈ ∆^{WC} if and only if supp(F) = {(r_1, r_2), (r_1, v̄), (v̄, r_2)};

3. If r_2 > v̄ − r_1, F ∈ ∆^{WC} if and only if supp(F) ⊆ {(r_1, r_2)} ∪ {v : v_1 = v̄, v_2 ≥ r_2} or supp(F) ⊆ {(r_1, r_2)} ∪ {v : v_2 = v̄, v_1 ≥ r_1}.

Note that F is pinned down from the means constraints and the support if the support contains three points.

C Mechanisms with negative λ_i

Example 1.
Suppose there are two bidders with v_i ∈ [0, v̄], v̄ = 1, and suppose p_2(v_1) = r + k · v_1, where r > 0, k > 0, r + k <
1. That is, the price that the second bidder pays upon getting the object depends positively on the first bidder's report.

If m_2 ∈ (r + k m_1, 1 − (1 − r − k) m_1), the worst-case distribution may be shown to be a ternary distribution with support {(0, r), (0, 1), (1, r + k)}. Bidder 2 does not get the object when v = (0, r) or v = (1, r + k). So Nature is undercutting the seller by putting mass at these value profiles. The worst-case expected revenue if m_2 ∈ (r + k m_1, 1 − (1 − r − k) m_1) is

−(kr/(1 − r)) m_1 + (r/(1 − r)) m_2 − r²/(1 − r),

and thus λ*_1 = −kr/(1 − r) <
0. The seller would lose (and Nature win) from a higher mean of bidder 1's value.

Intuitively, with a higher m_1, Nature can put more mass on the point (1, r + k), but because r + k > r, this would mean undercutting the seller with higher (on average) values of v_2. With a fixed m_2, this means that the total mass of the no-sale value profiles {(0, r), (1, r + k)} may also be increased, and this harms the seller.

Example 2.
Suppose there are two bidders with different upper bounds and means such that v̄_1 > m_1 > v̄_2. Consider a corner-hitting LSA with r_1 ∈ (v̄_2, m_1) and r_2 < v̄_2. Suppose also that the parameters satisfy m_2 < v̄_2 − (v̄_2/(v̄_1 − r_1))(v̄_1 − m_1). In this case, the worst-case distribution is a three-point distribution whose support contains the points (r_1, v̄_2) and (v̄_1, v̄_2) (if the tie-breaking is such that the second bidder gets the object when v = (v̄_1, v̄_2); otherwise this point can be approximated). But then, the higher m_1 is, the higher the probability Nature can put on the point (v̄_1, v̄_2), which harms the seller because her revenue is v̄_2 if v = (v̄_1, v̄_2), while it is r_1 > v̄_2 if v = (r_1, v̄_2). The worst-case expected revenue equals (1 − r_1/v̄_2) m_1 plus a term that does not depend on m_1, so λ*_1 = 1 − r_1/v̄_2 < 0.
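The mass calculation behind Example 1 can be replicated numerically. The parameter values below are illustrative and satisfy the stated condition on m_2; the masses follow from the means constraints alone:

```python
# Numerical companion to Example 1: the ternary worst-case distribution on
# {(0, r), (0, 1), (1, r+k)} with vbar = 1, the resulting worst-case revenue,
# and lambda*_1 = -k*r/(1-r) < 0.  Parameter values are illustrative.
r, k = 0.3, 0.2
m1, m2 = 0.4, 0.5
assert r + k * m1 < m2 < 1 - (1 - r - k) * m1   # the condition in the example

# The mass on (1, r+k) must equal m1; the mass q on (0, 1) then solves the
# second mean constraint r*(1 - q - m1) + q + (r + k)*m1 = m2.
q = (m2 - k * m1 - r) / (1 - r)
assert 0 < q < 1 and 0 < 1 - q - m1 < 1

revenue = r * q                                 # a sale occurs only at (0, 1)
closed_form = (-k * r / (1 - r)) * m1 + (r / (1 - r)) * m2 - r * r / (1 - r)
assert abs(revenue - closed_form) < 1e-12

# The coefficient on m1 is lambda*_1 = -k*r/(1-r), which is negative.
assert -k * r / (1 - r) < 0
```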