Project selection with partially verifiable information
Sumit Goel†    Wade Hann-Caruthers‡

Abstract
We consider a problem where the principal chooses a project from a set of available projects for the agent to work on. Each project provides some profit to the principal and some payoff to the agent, and these profits and payoffs are the agent's private information. The principal has a belief over these values, and his problem is to find an incentive compatible mechanism without transfers that maximizes expected profit. Importantly, we assume partial verifiability, so that the agent cannot report a project to be more profitable to the principal than it actually is. In this setup, we find a neat characterization of the set of incentive compatible mechanisms. Using this characterization, we find the optimal mechanism for the principal when there are two projects. Within a subclass of incentive compatible mechanisms, we show that a single cutoff mechanism is optimal, and we conjecture that it is the optimal incentive compatible mechanism.

∗ The authors would like to thank Laura Doval for incredible guidance and extremely useful conversations. We would like to further thank EC referees and Caltech Theory Lunch attendees for useful feedback.
† California Institute of Technology, [email protected]
‡ California Institute of Technology, [email protected]

Introduction
There exist many situations in which a principal, who has to make a decision, relies on the information reported by an agent about the true state of the world. Since their preferences might be misaligned, the agent has to be incentivized to truthfully reveal his private information. These incentive constraints depend on the agent's set of feasible messages, which has typically been assumed to be fixed and independent of the true state of the world. This is a valid assumption if the agent can always manipulate information without incurring any costs or penalties. However, the agent's ability to manipulate may be limited due to various physical and mental costs associated with lying, or due to the existence of some form of evidence which the agent is required to furnish. The existence of such environments where manipulation is costly motivates the literature on mechanism design with partially verifiable information.

The assumption of partial verifiability can take different forms depending upon the environment. In this paper, we focus on settings where the agent cannot oversell an alternative to the principal. In particular, we consider a problem where the principal has to choose a project from N different projects but does not know how profitable each is. The agent gets an independent payoff from working on the chosen project and is aware of both the principal's profit and his own payoff from each of the N projects. Thus, the principal needs to incentivize the agent to reveal this private information. The idea of no-overselling is formalized by assuming that the agent cannot report a project to be more profitable for the principal than it actually is.
With this partial verifiability constraint of no-overselling, we consider the problem of characterizing the set of implementable mechanisms and finding one that maximizes the principal's expected profit.

Since partial verifiability reduces the set of incentive constraints that the principal faces, the set of implementable mechanisms is potentially larger. Thus, there is reason to believe that with partial verifiability, the principal can obtain a higher expected profit than what is possible in the standard problem with no constraints on the message space. However, previous research suggests that this is not always the case. For instance, Celik [8] considers the problem of regulation of a monopolist in which the monopolist can understate its productivity by hiding equipment but cannot overstate it by disclosing equipment that does not exist. He identifies a sufficient condition under which the solutions to the two problems coincide. Another example is Moore [22], who shows that in an auction environment where the buyer cannot imitate higher-valuation types, the solution to the principal's expected revenue maximization problem remains the same. Thus, even though it seems intuitive that partial verifiability should allow the principal to obtain higher rent, the above results suggest it is not always true. Our objective is to study whether the no-overselling constraint in our setup leads to a different solution and higher profits for the principal.

We restrict attention to truthfully implementable direct mechanisms without transfers in our analysis [15]. In these mechanisms, the principal commits to choosing a project based on the profits and payoffs reported by the agent. We obtain a neat characterization of incentive compatible (IC) mechanisms and call them "table mechanisms." In a table mechanism, the principal commits to choosing the agent's favorite project among those that satisfy a criterion.
The set of projects satisfying the criterion (those that are "on the table") is increasing in the reported profit values and always includes a default project. Next, we show that the optimal IC mechanism for two projects is a cutoff mechanism. In the class of cutoff mechanisms, we show that a single cutoff mechanism defined by a parameter c is optimal. In this mechanism, a project is on the table if and only if its reported profit meets the cutoff profit c. This cutoff mechanism is different from the optimal incentive compatible mechanism in the standard problem and leads to higher profits for the principal. Indeed, without partial verifiability, the best the principal can do is select a project without soliciting any information from the agent. Thus, in contrast to the results mentioned above, there is a significant gain for the principal from the partial verifiability constraint of no-overselling in this setup.

Mechanism design has often been used to deal with problems of asymmetric information [24], [25]. An important theme in the literature on mechanism design is characterizing the set of implementable mechanisms [14], [27], [9], [15]. In particular, Green and Laffont [15] introduce the idea of partially verifiable information in mechanism design and identify a necessary and sufficient condition, called the "Nested Range Condition", under which the set of implementable mechanisms coincides with the set of truthfully implementable mechanisms (i.e., the revelation principle holds). Later research focuses on identifying implementability conditions in other environments with partially verifiable information: [18] looks at Nash implementation, [10] considers lying with finite costs, [4] and [30] allow for transfers, and [7] considers probabilistic verification. Some computer scientists have looked at the trade-off between monetary transfers and partial verifiability in terms of implementing social choice functions [12], [13].
Our setup belongs to the environment considered by Green and Laffont and satisfies their "Nested Range Condition". Thus, without loss of generality, we restrict attention to truthfully implementable mechanisms in our analysis.

Some applied papers in this literature find optimal or efficient mechanisms in environments with a specific form of partial verifiability ([21], [20], [22], [8]). For instance, [23] argues, using a model in which only some expenditure can be hidden, that spouses hiding income and assets from one another is efficient. [11] explains the complicated selling practices of real-world monopolists by considering an economy where some agents have limited ability to misrepresent their preferences. This paper considers the partial verifiability constraint of no-overselling and potentially explains the use of cutoff mechanisms in settings where evidence is significantly easier to verify than to obtain.

Our work is most closely related to [2], [26] and [1]. [2] and [26] consider a problem where the principal has to choose one of N agents who prefer being chosen and provide some private value to the principal from being chosen. In [2], the principal can verify his value from agent i at a cost c_i, while [26] assumes ex-post verifiability, so that the principal can penalize the winner by destroying a certain fraction of the surplus. [1] considers a project delegation problem in which the principal can verify characteristics of the chosen project but is uncertain about the set of available projects.
These papers find their respective optimal mechanisms for the principal and call them the favored agent mechanism [2], the shortlisting procedure [26], and the threshold rule [1]. While these mechanisms have some flavor of the cutoff mechanisms we obtain in this paper, there are important differences in the setup we consider here. Primarily, in their setups, the principal is empowered by ex-post verifiability of the reported values and the ability to use a prohibitively high punishment to deter the agent from telling any lie, whereas in our setup, the agent is constrained in that he cannot oversell, but the principal does not have the power to directly deter the agent from underselling.

The paper proceeds as follows. In Section 2, we present the model and definitions. Section 3 contains all the results. In Section 4, we discuss some other variants of the model and also compare our results with those from some related papers. Section 5 concludes. The more technical proofs are in the appendix.

Model
There are two parties: a principal and an agent. The principal has to choose a single project from the set of available projects [N] = {1, 2, ..., N}. Each project i ∈ [N] leads to values (p_i, a_i), where p_i denotes the profit for the principal and a_i the payoff to the agent. For all i ∈ [N], we assume p_i ∼ F and a_i ∼ G, where both F and G have support in X = [0, 1], and that all the p_i's and a_i's are independent. While the principal has only this (correct) belief about the project values, the agent actually knows the profit vector p = (p_1, p_2, ..., p_N) ∈ X^N and the payoff vector a = (a_1, a_2, ..., a_N) ∈ X^N for the N projects.

Given this setup, the principal chooses a mechanism d : X^N × X^N → [N] which maps a profile of reported profits π = (π_1, π_2, ..., π_N) ∈ X^N and payoffs α = (α_1, α_2, ..., α_N) ∈ X^N to a chosen project i ∈ [N]. The agent is constrained so that he cannot report a project to be more profitable for the principal than it actually is. This is the partial verifiability constraint of no-overselling. Formally, the agent's message space under any mechanism d is

M(p, a) = {(π, α) ∈ (X^N)^2 : π_i ≤ p_i for all i ∈ [N]}.

If (p, a) are the true project values and (π, α) is the report, the mechanism d leads to profit p_{d(π,α)} for the principal and payoff a_{d(π,α)} to the agent.

Definition 1.
A mechanism d is incentive compatible if for any (p, a) and any (π, α) ∈ M(p, a),

a_{d(p,a)} ≥ a_{d(π,α)}.

In words, d is incentive compatible if for any possible realization of profits p and payoffs a, it is weakly optimal for the agent to report the true values (among the reports available to him).

Denote by IC the set of incentive compatible mechanisms. The principal's problem is then:

max_{d ∈ IC} E[p_{d(p,a)}].

We define a few special classes of mechanisms. The first is table mechanisms.
Definition 2.
A mechanism d is a table mechanism if there exist functions f_1, f_2, ..., f_N : X^N → {0, 1} such that

• there exists some i such that f_i(p) = 1 for all p ∈ X^N,
• for i = 1, ..., N, f_i(p) is weakly increasing in p_j for all j = 1, ..., N, and
• d(p, a) ∈ argmax_i {a_i : f_i(p) = 1} for all (p, a) ∈ (X^N)^2.

Intuitively, one may think of a table mechanism d as follows. The principal commits to choosing the agent's favorite project among those that satisfy a criterion (those that are "on the table"). There is some default project that is always on the table. Finally, the set of projects on the table is increasing in their profitability for the principal: if the profits of all projects are weakly higher, the set of projects on the table is weakly larger.

For simplicity going forward, we will assume (without loss of generality) that f_N(p) = 1 for all p ∈ X^N. That is, in a table mechanism, project N is always on the table. Note that it is without loss of generality to consider truthfully implementable mechanisms: our no-overselling constraint satisfies the Nested Range Condition (θ ∈ M(θ′), θ′ ∈ M(θ″) ⟹ θ ∈ M(θ″)), which is both necessary and sufficient for the revelation principle to hold [15].

Definition 3. A mechanism d is a cutoff mechanism if it is a table mechanism and, for i = 1, ..., N − 1, there exist cutoffs c_i ∈ X such that f_i(p) = 1 if and only if p_i ≥ c_i.

In a cutoff mechanism, a project is on the table if the principal's profit from the project meets a threshold. That is, whether a project is on the table or not depends only on that particular project's profit value. The principal then chooses the agent's favorite project among those that meet their cutoffs and the default.
Definition 4.
A mechanism d is a single cutoff mechanism if it is a table mechanism and there is a c such that, for i = 1, ..., N − 1, f_i(p) = 1 if and only if p_i ≥ c.

A single cutoff mechanism is a cutoff mechanism in which all the cutoffs are the same.
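To make the definitions concrete, the following sketch (our own Python illustration, not part of the paper's formal analysis) implements a single cutoff mechanism as a table mechanism and brute-force checks Definition 1 on a discretized type space; the helper names `choose` and `is_ic_under_no_overselling` are our own.

```python
from itertools import product

def choose(pi, alpha, c):
    """Single cutoff table mechanism: project i < N is 'on the table' iff its
    reported profit meets the cutoff c; the last project is the default and is
    always on the table. The agent's favorite project on the table is chosen."""
    N = len(pi)
    table = [i for i in range(N - 1) if pi[i] >= c] + [N - 1]
    return max(table, key=lambda i: alpha[i])

def is_ic_under_no_overselling(c, grid, N=2):
    """Check Definition 1 on a finite grid: for every type (p, a) and every
    feasible report (pi, alpha) with pi <= p coordinatewise (no overselling),
    truth-telling must give the agent a weakly higher payoff."""
    pts = list(product(grid, repeat=N))
    for p in pts:
        for a in pts:
            truthful = a[choose(p, a, c)]
            for pi in pts:
                if any(pi[i] > p[i] for i in range(N)):
                    continue  # overselling is infeasible
                for alpha in pts:  # payoff reports are unrestricted
                    if a[choose(pi, alpha, c)] > truthful:
                        return False
    return True

grid = [0.0, 0.25, 0.5, 0.75, 1.0]
print(is_ic_under_no_overselling(0.5, grid))  # → True
```

The check enumerates every type and every feasible report, so it scales poorly; it is meant only to illustrate the definitions, and it previews the easy direction of Theorem 1 below: underselling can only shrink the set of projects on the table.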
Results

First, we characterize the set of incentive compatible mechanisms and show that it is exactly the set of table mechanisms. Next, we show that for two projects, a cutoff mechanism is optimal. Based on this result, we conjecture that a cutoff mechanism is optimal for N projects, consider the problem of finding the optimal cutoff mechanism, and show that it is a single cutoff mechanism whose cutoff is given by

c(N) = 1 − 1/√N + o(1/√N).

In particular, we show that the expected profit of the principal tends to 1 as N tends to ∞.

We begin by giving a characterization of the set of incentive compatible mechanisms IC.

Theorem 1. d is incentive compatible if and only if it is a table mechanism.

It is fairly straightforward from the definitions to check that table mechanisms are incentive compatible. Indeed, under a table mechanism, the agent prefers reporting higher π's to reporting lower ones and, given any report of π's, prefers reporting his payoffs truthfully to misreporting them (since such a misrepresentation can only lead the principal to make a choice which gives the agent a lower payoff). The other direction is more involved.

Proof of Theorem 1.
Suppose d is a table mechanism. Consider any profile (p, a). If the agent reports some (π, α), we know that π ≤ p from the constraint (π, α) ∈ M(p, a). Since the f_i's are increasing, this implies that

{i ∈ [N] : f_i(π) = 1} ⊆ {i ∈ [N] : f_i(p) = 1}.

Since the agent gets his preferred project among those available and reporting truthfully maximizes his set of available projects, the agent cannot gain by misreporting. Therefore, d is incentive compatible.

(Recall that for functions f(N) and g(N), f(N) ∈ o(g(N)) if and only if lim_{N→∞} f(N)/g(N) = 0.)

Now, suppose that d is an incentive compatible mechanism. Define the functions (f_i) as follows: f_i(p) = 1 if and only if there exists some (p′, a′) ∈ (X^N)^2 with p′ ≤ p such that d(p′, a′) = i. We want to show that these functions satisfy the properties of a table mechanism. There is some k ∈ [N] such that the decision at a = p = (0, 0, ..., 0) is k. Hence, by definition, f_k(p) = 1 for all p. Next, consider any j and p′ ≥ p, and suppose f_j(p) = 1. This implies that there exists some (p″, a″) ∈ (X^N)^2 with p″ ≤ p such that d(p″, a″) = j. Since p″ ≤ p ≤ p′, by definition f_j(p′) = 1, which shows that f_j is increasing.

Finally, consider any profile (p, a). We want to show that d(p, a) ∈ argmax_i {a_i : f_i(p) = 1}. Suppose towards a contradiction that d(p, a) is not in this set. By definition, d(p, a) ∈ {i : f_i(p) = 1}. Thus, there exist j, k ∈ {i : f_i(p) = 1} such that a_j > a_k and d(p, a) = k. But then, there exist a′ and p′ ≤ p such that d(p′, a′) = j. Therefore, the agent has a profitable deviation (p′, a′), which contradicts the fact that d is incentive compatible. Therefore, it must be that d(p, a) ∈ argmax_i {a_i : f_i(p) = 1}. Thus, d is a table mechanism. ∎

With this characterization of the incentive compatible mechanisms, we focus on finding the optimal table mechanism. Here and for the remainder of the paper, we will assume that p_i, a_i ∼ U([0, 1]) for all i.

Theorem 2.
For N = 2, the optimal incentive compatible mechanism is a cutoff mechanism.

Proof. Suppose d is incentive compatible. From Theorem 1, we know that d is a table mechanism, so f_2(p) = 1 for all p ∈ X^2. Define c = sup{t : f_1(t, t) = 0}. For an arbitrary (p_1, p_2) ∈ X^2, we show that the cutoff mechanism d′ with c_1 = c is at least as good for the principal as the given mechanism d. Consider the following (exhaustive and mutually exclusive) cases:

• p_1 ∈ [0, c), p_2 ∈ [0, c): Under both d and d′, f_1(p) = 0 (for d, by monotonicity, since f_1(t, t) = 0 for t = max(p_1, p_2) < c), and therefore the second project is chosen. Thus, the two mechanisms are identical for such profiles.

• p_1 ∈ [0, c), p_2 ∈ [c, 1]: Under d′, f_1(p) = 0 and therefore project 2 is chosen, giving the principal p_2. Under d, the principal gets (in expectation over the agent's payoffs) either p_2 or (p_1 + p_2)/2, both of which are at most p_2. Thus, p_{d′} ≥ p_d for these profiles.

• p_1 ∈ [c, 1], p_2 ∈ [0, c): Under d′, f_1(p) = 1, and since project 2 is always available, the principal gets p_1 with probability 1/2 and p_2 with probability 1/2. Under d, the principal gets either p_2 or (p_1 + p_2)/2, both of which are at most (p_1 + p_2)/2. Thus, p_{d′} ≥ p_d for these profiles.

• p_1 ∈ [c, 1], p_2 ∈ [c, 1]: Under both d and d′, f_1(p) = 1 (for d, by monotonicity, since f_1(t, t) = 1 for t = min(p_1, p_2) > c), and therefore the same project is chosen. Thus, the two mechanisms are identical on this set.

This shows that for any IC mechanism, there is a cutoff mechanism under which the principal's expected profit is at least as high. Hence, some cutoff mechanism is optimal among IC mechanisms. And indeed, since the principal's expected profit from cutoff mechanisms is continuous in the cutoff and the set of possible cutoffs is compact, at least one cutoff mechanism is optimal. ∎

Motivated by the result for two projects, we conjecture that a cutoff mechanism is optimal for an arbitrary number of projects and consider the problem of finding the optimal cutoff mechanism. The following theorem gives the optimal table mechanism within the class of cutoff mechanisms.
Theorem 3.
For general N, the optimal cutoff mechanism has a single cutoff c(N) that is defined by the equation

N(1 − c)(1 − c + c^N) = 1 − c^N.

The principal's expected utility from the corresponding optimal cutoff mechanism is given by
1/2 + (c(N)/2) (c(N) − c(N)^N).

While it is hard to obtain an exact closed-form solution for the optimal single cutoff, we show that it has the following asymptotic property.

Lemma 4.
The optimal single cutoff satisfies c(N) = 1 − 1/√N + o(1/√N).

Proof.
Let φ_N(c) = N(1 − c)(1 − c + c^N) − (1 − c^N). Then for any α > 0,

lim_{N→∞} φ_N(1 − α/√N) = α² − 1,

and so for all sufficiently large N, the quantity φ_N(1 − α/√N) is positive if α > 1 and negative if α < 1. Hence, for any ε > 0 and all sufficiently large N, the unique interior root c(N) of the equation from Theorem 3 satisfies

(1 − ε)/√N ≤ 1 − c(N) ≤ (1 + ε)/√N. ∎
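For completeness, the limit in the proof can be expanded directly (a routine computation, supplied here for the reader). Writing c = 1 − α/√N,

```latex
\varphi_N(c)
  \;=\; N\cdot\frac{\alpha}{\sqrt{N}}\left(\frac{\alpha}{\sqrt{N}}+c^{N}\right)
        -\bigl(1-c^{N}\bigr)
  \;=\; \alpha^{2}+\bigl(\alpha\sqrt{N}+1\bigr)c^{N}-1,
```

and since c^N = (1 − α/√N)^N ≤ e^{−α√N}, the term (α√N + 1)c^N vanishes as N → ∞, leaving the limit α² − 1.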
It follows from Lemma 4 that as N → ∞, the optimal single cutoff c(N) → 1, and hence the principal's expected profit under the optimal single cutoff mechanism also tends to 1. The following graph depicts the optimal cutoff for different numbers of projects.

Together, Theorems 2 and 3 give the solution to the principal's problem for the case of two projects.
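The optimal cutoff can also be computed numerically. The sketch below (our own Python illustration, not from the paper) bisects for the interior root of φ_N(c) = N(1 − c)(1 − c + c^N) − (1 − c^N) and, for N = 2, Monte Carlo-simulates the single cutoff mechanism as a check on the expected profit formula of Theorem 3; all function names are our own.

```python
import random

def phi(c, N):
    # phi_N(c) = N(1-c)(1-c+c^N) - (1-c^N); the optimal single cutoff is its interior root
    return N * (1 - c) * (1 - c + c ** N) - (1 - c ** N)

def optimal_cutoff(N, iters=100):
    # phi_N(0) = N - 1 > 0 and phi_N is negative just below 1 (c = 1 is a
    # spurious boundary root), so bisection brackets the interior root.
    lo, hi = 0.0, 1.0 - 1e-9
    for _ in range(iters):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if phi(mid, N) > 0 else (lo, mid)
    return (lo + hi) / 2

def simulated_profit(N, c, trials=200_000, seed=0):
    # Monte Carlo of the single cutoff mechanism: project i < N is on the
    # table iff p_i >= c, project N is the default; the agent's favorite
    # project on the table is chosen, and we average the principal's profit.
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        p = [rng.random() for _ in range(N)]
        a = [rng.random() for _ in range(N)]
        table = [i for i in range(N - 1) if p[i] >= c] + [N - 1]
        total += p[max(table, key=lambda i: a[i])]
    return total / trials

c2 = optimal_cutoff(2)
print(round(c2, 6))                               # → 0.5
print(round(0.5 + (c2 / 2) * (c2 - c2 ** 2), 4))  # closed-form profit: 0.5625
print(round(simulated_profit(2, c2), 2))          # Monte Carlo agrees: ≈ 0.56
for N in (2, 5, 10, 100):
    print(N, round(optimal_cutoff(N), 4), round(1 - N ** -0.5, 4))
```

The last loop shows the computed cutoff rising with N toward the 1 − 1/√N asymptote of Lemma 4, with the corresponding expected profit 1/2 + (c/2)(c − c^N) tending to 1.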
Corollary 5.
For N = 2, the optimal incentive compatible mechanism is a cutoff mechanism with c = 1/2. (Substituting N = 2 into the equation of Theorem 3 gives 2(1 − c)(1 − c + c²) = 1 − c², which simplifies to 2c² − 3c + 1 = 0, whose root in [0, 1) is c = 1/2.)

Discussion

In this section, we discuss how our results compare to some other results in the literature and also study some simple and important variants of our model. We begin by discussing some useful interpretations of our model and results. We then turn to comparing our model with variants in which (i) the agent's message space is state-independent, (ii) the principal can incentivize the agent with message-contingent transfers, and (iii) the agent must sequentially report the project values and there is no recall.
The simplest implementation of the conjecturally optimal mechanism has a nice delegation interpretation. The principal selects a cutoff profit and a default project and delegates the project choice to the agent, in the sense that the agent can select either the default project or a project which meets the cutoff profit, and the principal signs off on the final decision. Under this delegation, the agent chooses his favorite project among those that meet the cutoff and the default project. Note in particular that this implementation only requires the agent to report information about the chosen project. This is outcome-equivalent to the cutoff mechanism.

We note that many instances of "cutoff mechanisms" with flavors similar to ours have appeared in the literature, but the optimality of such mechanisms has been driven by the assumption of ex-post verifiability ([2], [26], [1]). In particular, in most previous models, the principal's ability to punish in the case of a misreport is tantamount to the assumption that the agent cannot lie. Here, we offer an alternative way of rationalizing such cutoff mechanisms via the no-overselling (or, more generally, interim partial verifiability) constraint, which alters the agent's incentives without threatening the agent in the case of a misreport. To help elucidate this point, we make the following observation: if our model were altered so that the agent had an unconstrained message space, but the principal were required to take the default project in case the agent should oversell any of the projects, then all of our results would carry over.
To investigate the impact of the partial verifiability constraint on the principal's problem, we examine the principal's problem without the no-overselling constraint and compare the results to the ones obtained above. In this setting, the principal's problem is again to find an incentive compatible mechanism d : (X^N)^2 → [N] that maximizes his expected payoff. However, the incentive constraints on a mechanism are now stronger.

Definition 5.
A mechanism d is incentive compatible if for any (p, a) and any (π, α) ∈ (X^N)^2,

a_{d(p,a)} ≥ a_{d(π,α)}.

The following lemma characterizes the set of incentive compatible mechanisms in this setting.
Lemma 6.
Without the no-overselling constraint, the following are equivalent:

• d is incentive compatible;
• there exists a nonempty S ⊆ [N] such that for all (p, a) ∈ (X^N)^2, d(p, a) ∈ argmax_i {a_i : i ∈ S};
• d is a table mechanism such that for all i ∈ [N], either f_i(p) = 0 for all p ∈ X^N or f_i(p) = 1 for all p ∈ X^N.

Observe that for any incentive compatible mechanism, the decision at any profile (p, a) is independent of p. That is, in an IC mechanism, when the agent can oversell, the choice of the project cannot depend on the profit it generates for the principal.

Lemma 7.
Without the no-overselling constraint, the principal's expected payoff from any incentive compatible mechanism is E[p_i].

Proof. The result follows from the above discussion and the assumption that the agent's and principal's values are iid.

In particular, when the principal's values are drawn uniformly from [0, 1], his expected payoff is 1/2 for all N. That is, he can do no better by incentivizing the agent than by randomly choosing one of the projects. On the other hand, when the agent cannot oversell, the principal can get an expected profit that is greater than E[p_i] for any number of projects and that increases to 1 as N → ∞. Thus, the principal can obtain a significant gain from the no-overselling constraint.

To compare the power that partial verifiability confers on the principal with the power conferred by the ability to make transfers, we examine the principal's problem when there is no partial verifiability but the principal can use transfers to incentivize the agent, and we compare the results to the ones obtained above. With transfers, a mechanism is defined by (d, t), where d : (X^N)^2 → [N] denotes the project choice as before, and t : (X^N)^2 → R denotes the transfer from the principal to the agent. Note that we assume ex-post verifiability of the profit value p_{d(p,a)} once the project has been chosen.

Consider the following mechanism:

d(p, a) = argmax_{i ∈ [N]} (p_i + a_i)

and

t(π, α) = p_{d(π,α)} − E[max_{i ∈ [N]} (p_i + a_i)].

Under this mechanism, the principal essentially sells the firm to the agent. The principal charges the expected surplus under the efficient outcome, E[max_{i ∈ [N]} (p_i + a_i)], as a fixed cost, which does not affect the agent's incentives. For any (p, a), the agent's utility on reporting (π, α) is a_{d(π,α)} + p_{d(π,α)} − E[max_{i ∈ [N]} (p_i + a_i)], and this is maximized by reporting (p, a). The agent's expected payoff is then 0.
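The incentive argument can be illustrated numerically. In the sketch below (our own Python illustration, with made-up numbers), we fix one realized type (p, a) and check over a grid of unrestricted reports that the agent's utility a_{d(π,α)} + p_{d(π,α)} − K is maximized by truth-telling; since the fee K = E[max_i(p_i + a_i)] is a constant, it does not affect the comparison and is set to 0.

```python
from itertools import product

def efficient_choice(pi, alpha):
    # d(pi, alpha) = argmax_i (pi_i + alpha_i): the surplus-maximizing project
    return max(range(len(pi)), key=lambda i: pi[i] + alpha[i])

def agent_utility(p, a, pi, alpha):
    # Realized utility under the 'sell the firm' transfers (constant fee dropped):
    # the agent receives his payoff plus the principal's profit from the chosen project.
    i = efficient_choice(pi, alpha)
    return a[i] + p[i]

p, a = (0.7, 0.2, 0.9), (0.1, 0.8, 0.3)   # one hypothetical type
grid = (0.0, 0.25, 0.5, 0.75, 1.0)
truthful = agent_utility(p, a, p, a)
best = max(agent_utility(p, a, pi, alpha)
           for pi in product(grid, repeat=3)
           for alpha in product(grid, repeat=3))
print(truthful >= best)  # → True: no report beats truth-telling
```

Whatever the agent reports, he ends up holding the full surplus p_i + a_i of the chosen project, so he cannot do better than steering the mechanism to the efficient project, which truthful reporting does.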
Thus, this mechanism is efficient and optimal for the principal in the presence of transfers. So while the no-overselling constraint significantly empowers the principal, the gains from the ability to make transfers are even higher. Note that incentive compatibility of the above mechanism relies on ex-post verifiability. However, the mechanism with the same d as above and transfers given by

t(π, α) = π_{d(π,α)} − E[max_{i ∈ [N]} (p_i + a_i)]

is outcome-equivalent to the above mechanism and implementable if we allow for transfers and assume no-overselling instead of ex-post verifiability.

In order to understand environments in which information about the projects is obtained sequentially, we investigate a dynamic version of the principal's problem in which there is no recall. In this setting, the agent observes the profit p_i and payoff a_i for project i and must report these values to the principal. The principal must then decide whether to select project i or permanently abandon it. If the principal decides to abandon project i, the agent then observes p_{i+1} and a_{i+1}, and so on. In comparison to the static setup studied earlier, the dynamics-without-recall setup both empowers the principal, since the incentive constraints are weaker, and weakens him, because there is no recall. The question is whether the net effect is positive or negative. In this section, we find the optimal mechanism under this setup and compare the expected profits with those from the optimal cutoff mechanism in the static setup.

Lemma 8.
In the dynamics-without-recall setup, the optimal incentive compatible mechanism is such that project i is accepted if and only if p_i ≥ c_i and a_i ≥ c_i, and for all j < i, either p_j < c_j or a_j < c_j, where the cutoffs satisfy

c_{i−1} = c_i + (1/2)(1 − c_i)^2 for i = 2, ..., N, with c_N = 0.

The following plot demonstrates that the principal's profit from the optimal cutoff mechanism in the static setup is (just barely) higher than that from the optimal dynamic mechanism.

While we considered some simple extensions in this section, there are many interesting problems that remain unanswered. For instance, how much does the principal gain in a dynamic model where the agent has to sequentially report the project values and there is recall? Note that it is without loss of generality to assume that the agent reports all the project values. This follows from the observation that any mechanism where the principal chooses a project after learning only i project values is equivalent to a mechanism where the principal learns all project values from the agent but decides based on the first i project values. While the set of incentive compatible mechanisms in the dynamic setup is strictly bigger than the set of table mechanisms, it remains open whether there is a mechanism that generates higher expected profit. Also, many of these results likely extend (with minor modifications) to more general distributions; we leave this extension as future work.

Conclusion
In a principal-agent problem of project selection with the partial verifiability constraint of no-overselling, we find a neat characterization of the set of incentive compatible mechanisms. We show that for two projects, a cutoff mechanism is optimal. Based on this result, we conjecture that a cutoff mechanism is optimal for any number of projects. We demonstrate that in the class of cutoff mechanisms, there is a unique optimal mechanism and that this mechanism is a single cutoff mechanism. We show that the cutoff corresponding to this mechanism increases with the number of projects, and that the principal's expected profit under the optimal mechanism converges to the upper bound of the support of the profit distribution (that is, the maximum possible profit) as the number of projects tends to ∞.

In contrast, we find that when the agent can oversell, the principal can do no better than choosing a project randomly. However, in cases where the principal can use transfers and there is ex-post verifiability, he can implement the efficient mechanism and extract the entire surplus by essentially selling the firm. Thus, we conclude that in settings where transfers are not feasible, there are significant gains to be had for the principal from the no-overselling constraint.

References

[1] Mark Armstrong and John Vickers, A model of delegated project choice, Econometrica (2010), no. 1, 213–244.

[2] Elchanan Ben-Porath, Eddie Dekel, and Barton L Lipman, Optimal allocation with costly verification, American Economic Review (2014), no. 12, 3779–3813.

[3] Elchanan Ben-Porath, Eddie Dekel, and Barton L Lipman,
Mechanisms with evidence: Commitment and robustness, Econometrica (2019), no. 2, 529–566.

[4] Elchanan Ben-Porath and Barton L Lipman, Implementation with partial provability, Journal of Economic Theory (2012), no. 5, 1689–1724.

[5] Jesse Bull and Joel Watson, Evidence disclosure and verifiability, Journal of Economic Theory (2004), no. 1, 1–31.

[6] Jesse Bull and Joel Watson, Hard evidence and mechanism design, Games and Economic Behavior (2007), no. 1, 75–93.

[7] Ioannis Caragiannis, Edith Elkind, Mario Szegedy, and Lan Yu, Mechanism design: from partial to probabilistic verification, Proceedings of the 13th ACM Conference on Electronic Commerce, 2012, pp. 266–283.

[8] Gorkem Celik, Mechanism design with weaker incentive compatibility constraints, Games and Economic Behavior (2006), no. 1, 37–44.

[9] Partha Dasgupta, Peter Hammond, and Eric Maskin, The implementation of social choice rules: Some general results on incentive compatibility, The Review of Economic Studies (1979), no. 2, 185–216.

[10] Raymond Deneckere and Sergei Severinov, Mechanism design with partial state verifiability, Games and Economic Behavior (2008), no. 2, 487–513.

[11] Raymond J Deneckere and Sergei Severinov, Mechanism design and communication costs, USC CLEO Research Paper C02-8 (2001).

[12] Diodato Ferraioli, Paolo Serafino, and Carmine Ventre, What to verify for optimal truthful mechanisms without money, Proceedings of the 2016 International Conference on Autonomous Agents & Multiagent Systems, 2016, pp. 68–76.

[13] Dimitris Fotakis, Piotr Krysta, and Carmine Ventre, Combinatorial auctions without money, Algorithmica (2017), no. 3, 756–785.

[14] Allan Gibbard, Manipulation of voting schemes: a general result, Econometrica (1973), no. 4, 587–601.

[15] Jerry R Green and Jean-Jacques Laffont, Partially verifiable information and mechanism design, The Review of Economic Studies (1986), no. 3, 447–456.

[16] Sergiu Hart, Ilan Kremer, and Motty Perry, Evidence games: Truth and commitment, American Economic Review (2017), no. 3, 690–713.

[17] Matthew O Jackson and Xu Tan, Deliberation, disclosure of information, and voting, Journal of Economic Theory (2013), no. 1, 2–30.

[18] Navin Kartik, Strategic communication with lying costs, The Review of Economic Studies (2009), no. 4, 1359–1395.

[19] Alison Kitson, Gill Harvey, and Brendan McCormack, Enabling the implementation of evidence based practice: a conceptual framework, BMJ Quality & Safety (1998), no. 3, 149–158.

[20] Jeffrey M Lacker and John A Weinberg, Optimal contracts under costly state falsification, Journal of Political Economy (1989), no. 6, 1345–1363.

[21] Giovanni Maggi and Andrés Rodriguez-Clare, Costly distortion of information in agency problems, The RAND Journal of Economics (1995), 675–689.

[22] John Moore, Global incentive constraints in auction design, Econometrica (1984), 1523–1535.

[23] Alistair Munro et al., Hide and seek: A theory of efficient income hiding within the household, National Graduate Institute for Policy Studies, 2014.

[24] Roger B Myerson, Optimal auction design, Mathematics of Operations Research (1981), no. 1, 58–73.

[25] Roger B Myerson and Mark A Satterthwaite, Efficient mechanisms for bilateral trading, Journal of Economic Theory (1983), no. 2, 265–281.

[26] Tymofiy Mylovanov and Andriy Zapechelnyuk, Optimal allocation with ex post verification and limited penalties, American Economic Review (2017), no. 9, 2666–2694.

[27] Mark Allen Satterthwaite, Strategy-proofness and Arrow's conditions: Existence and correspondence theorems for voting procedures and social welfare functions, Journal of Economic Theory (1975), no. 2, 187–217.

[28] Sergei Severinov and Raymond Deneckere, Screening when some agents are nonstrategic: does a monopoly need to exclude?, The RAND Journal of Economics (2006), no. 4, 816–840.

[29] Itai Sher and Rakesh Vohra, Price discrimination through communication, Theoretical Economics (2015), no. 2, 597–648.

[30] Nirvikar Singh and Donald Wittman, Implementation with partial verification, Review of Economic Design (2001), no. 1, 63–84.

Appendix
Proof of Theorem 3.
Note that since we are maximizing a continuous function over a compact space, a solution exists. We proceed in three steps. In step 1, we show that a cutoff mechanism with a cutoff of 0 or 1 cannot be optimal; that is, the solution must be interior. In step 2, we show that a cutoff mechanism with different cutoffs cannot be optimal. Finally, we maximize the principal's utility with respect to the single cutoff $c$ to get the optimal cutoff mechanism.

For an arbitrary cutoff mechanism with cutoffs $c = (c_1, c_2, \ldots, c_{N-1})$, we compute the expected utility of the principal. For the expressions below, set $c_N = 0$.
$$EU_p(c) = \sum_{i=1}^{N} \frac{1+c_i}{2}\, P(d = i)$$
Note that $P(d = i) = (1 - c_i)\, P(d = N \mid c_i = 0)$. That is, conditional on $p_i \geq c_i$, the probability that the decision is $i$ is the same as the probability that the decision is $N$ when the cutoff $c_i$ is set to 0 and the remaining cutoffs are the same. To find the probability that the last project $N$ is chosen, we condition on its rank, which is defined in terms of the $a_i$'s: the rank of $N$ is $k$ if there are exactly $k-1$ projects $i$ with $a_i > a_N$.
$$P(d = N) = \sum_{k=1}^{N} P(\text{rank of } N = k)\, P(d = N \mid \text{rank of } N = k) = \frac{1}{N} \sum_{k=1}^{N} P(d = N \mid \text{rank of } N = k) = \frac{1}{N} \sum_{k=1}^{N} \sum_{\substack{S \subseteq [N-1] \\ |S| = k-1}} \frac{\prod_{i \in S} c_i}{\binom{N-1}{k-1}}$$

We first argue that the cutoffs have to be interior in the optimal cutoff mechanism. Suppose $d$ is a cutoff mechanism with cutoffs $c = (c_1, \ldots, c_{N-1})$ and $c_i = 1$ for some $i$. Let $u^*$ denote the expected utility of the principal, and note that $u^* < 1$. Define a new cutoff mechanism in which all cutoffs remain the same except for $i$, which now has the cutoff $u^*$. Now, with probability $u^*$, the principal gets $u^*$, and with probability $1 - u^*$, the principal gets a convex combination of $u^*$ and $\frac{1+u^*}{2} > u^*$. This means that his expected payoff under the new mechanism is strictly greater than $u^*$. Therefore, an optimal mechanism cannot have a cutoff of 1. Now, suppose there is an $i \in [N-1]$ which has a cutoff of 0. Observe that the expected utility from any project, conditional on it being chosen, is at least $\frac{1}{2}$. Consider increasing the cutoff to $c_i = \frac{1}{2}$ in the new mechanism while keeping every other cutoff the same. For every $p_i < \frac{1}{2}$, under the old mechanism the principal's payoff was some convex combination of $p_i$ and some $k \geq \frac{1}{2}$; under the new mechanism, it is just $k > p_i$. For $p_i \geq \frac{1}{2}$, the new mechanism is identical to the old one. Therefore, an optimal mechanism cannot have a cutoff of 0. Thus, we know that in the optimal cutoff mechanism, $c_i \notin \{0, 1\}$ for any $i \in [N-1]$.

Next, suppose $d$ is such that there exist $i, j$ with $c_i > c_j$. Let $\bar{c} = \frac{c_i + c_j}{2}$ and define $t$ so that $\bar{c} + t = c_i$ and $\bar{c} - t = c_j$. From the above calculations, we know that if we write $EU_p(c)$ in expanded form and plug in $c_i = \bar{c} + t$ and $c_j = \bar{c} - t$, we get a polynomial that is at most cubic in $t$. This is because we get a term that is at most quadratic in $t$ for $P(d = k)$ for any $k \in [N]$, and in the expected utility calculation we multiply that with $\frac{1+c_k}{2}$. Note that by the symmetry of the projects, the principal should get the same expected utility if we changed $t$ to $-t$. Therefore, the polynomial should be of the form $at^2 + b$. Now, if $a > 0$ (or $a < 0$), the principal gains from increasing (or decreasing) $t$, which is possible since we know that the solution is interior and $c_i > c_j$. Therefore, $d$ cannot be optimal in either case. When $a = 0$, the principal is indifferent to increasing $t$ until one of the cutoffs reaches an extreme of 1 or 0, which we know cannot be optimal. Therefore, a cutoff mechanism with different cutoffs cannot be optimal.

The above discussion implies that the solution to the optimization problem has to be a single cutoff mechanism. Let $c$ be the single cutoff. Using the above calculations, we have
$$P(d = N) = \frac{1}{N} \sum_{k=1}^{N} \sum_{\substack{S \subseteq [N-1] \\ |S| = k-1}} \frac{\prod_{i \in S} c_i}{\binom{N-1}{k-1}} = \frac{1}{N} \sum_{k=1}^{N} c^{k-1} = \frac{1 - c^N}{N(1 - c)}$$
Therefore,
$$EU_p(c) = \frac{1}{2}\, P(d = N) + \frac{1+c}{2}\, P(d \neq N) = \frac{1}{2} + \frac{c}{2}\left(1 - \frac{1 - c^N}{N(1 - c)}\right)$$
Differentiating with respect to $c$ gives the desired optimal cutoff mechanism defined by the single cutoff $c(N)$:
$$\frac{\partial EU_p(c)}{\partial c} = \frac{1}{2}\left(1 - \frac{1 - c^N}{N(1 - c)}\right) - \frac{c}{2N}\left(\frac{(1 - c)(-Nc^{N-1}) + (1 - c^N)}{(1 - c)^2}\right) = \frac{1}{2} + \frac{c^N}{2(1 - c)} - \frac{1 - c^N}{2N(1 - c)^2}$$
Setting it equal to zero gives us
$$N(1 - c)(1 - c + c^N) = 1 - c^N$$
Plugging into the expected utility expression gives us the maximum utility of the principal in the class of cutoff mechanisms: $\frac{1}{2} + \frac{c(N)}{2}\left(c(N) - c(N)^N\right)$.

Proof of Lemma 6.
First, we establish the equivalence of the first two statements. Suppose $d$ is such that there exists $S \subseteq [N]$ with $d(p, a) = \arg\max_i \{a_i : i \in S\}$. This mechanism is clearly incentive compatible because the agent always gets his favorite project from his option set $S$, which is the range of the mechanism. Conversely, suppose $d$ is an incentive compatible mechanism. Let $S = \{i \in [N] : \text{there exists } (p, a) \in X^N \text{ with } d(p, a) = i\}$ be the range of the mechanism. Note that at any profile $(p, a)$, the agent can implement any project $i \in S$ by reporting the appropriate $(\pi, \alpha)$. To incentivize truth-telling at $(p, a)$, it has to be the case that the agent's favorite project in $S$ is chosen. Formally, for any $(p, a)$,
$$d(p, a) = \arg\max_i \{a_i : i \in S\}$$
Thus, any incentive compatible mechanism has to be of the form described in the second statement. The equivalence of the second and third statements follows in a straightforward manner by letting $S = \{i \in [N] : f_i(p) = 1 \text{ for all } p \in X^N\}$.

Proof of Lemma 8.
We find the optimal mechanism in this setup by backward induction. When the agent investigates project $N-1$, both the principal and the agent know that project $N$ will be chosen if project $N-1$ is rejected, in which case the expected profit and expected payoff are both $\frac{1}{2}$. As a result, in the optimal incentive compatible mechanism, the principal accepts project $N-1$ if $p_{N-1} \geq \frac{1}{2}$ and $a_{N-1} \geq \frac{1}{2}$. Similarly, they would want project $i-1$ to be accepted only if $p_{i-1} \geq c_{i-1}$, where $c_{i-1}$ is equal to the expected profit upon rejecting project $i-1$.
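The single-cutoff characterization in the proof of Theorem 3 can be checked numerically. The following Python sketch (the function names `optimal_cutoff`, `expected_utility`, and `simulate` are our own) solves the first-order condition $N(1-c)(1-c+c^N) = 1-c^N$ by bisection and compares the closed-form expected utility against a Monte Carlo simulation of the single-cutoff mechanism, assuming, as in the derivation above, that profits and payoffs are i.i.d. uniform on $[0,1]$:

```python
import random

def optimal_cutoff(N, tol=1e-12):
    """Solve N(1-c)(1 - c + c^N) = 1 - c^N for the interior root c in (0, 1).

    f is positive at c = 0 (where it equals N - 1) and negative just below
    c = 1, so plain bisection finds the interior root.
    """
    def f(c):
        return N * (1 - c) * (1 - c + c ** N) - (1 - c ** N)
    lo, hi = 0.0, 1.0 - 1e-9
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(mid) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def expected_utility(c, N):
    """Closed form EU_p(c) = 1/2 + (c/2) * (1 - (1 - c^N) / (N(1 - c)))."""
    return 0.5 + (c / 2) * (1 - (1 - c ** N) / (N * (1 - c)))

def simulate(c, N, trials=200_000, seed=0):
    """Monte Carlo the single-cutoff mechanism: project i < N is active if
    p_i >= c, project N is always active (c_N = 0), and the agent's favorite
    active project (highest a_i) is chosen; the principal earns p_d."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        p = [rng.random() for _ in range(N)]
        a = [rng.random() for _ in range(N)]
        active = [i for i in range(N - 1) if p[i] >= c] + [N - 1]
        d = max(active, key=lambda i: a[i])
        total += p[d]
    return total / trials
```

For $N = 2$ the first-order condition reduces to $c(2) = \frac{1}{2}$, and both the closed form and the simulation give expected utility $\frac{9}{16}$, matching $\frac{1}{2} + \frac{c(N)}{2}(c(N) - c(N)^N)$.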