Bayesian Elicitation
Mark Whitmeyer ∗ January 28, 2020
Abstract
How can a receiver design an information structure in order to elicit information from a sender? We study how a decision-maker can acquire more information from an agent by reducing her own ability to observe what the agent transmits. Intuitively, when the two parties' preferences are not perfectly aligned, this garbling relaxes the sender's concern that the receiver will use her information to the sender's disadvantage. We characterize the optimal information structure for the receiver. The main result is that under broad conditions, the receiver can do just as well as if she could commit to a rule mapping the sender's message to actions: information design is just as good as full commitment. Similarly, we show that these conditions guarantee that ex ante information acquisition always benefits the receiver, even though this learning might actually lower the receiver's expected payoff in the absence of garbling. We illustrate these effects in a range of economically relevant examples.
Keywords:
Costly Signaling, Cheap Talk, Information Design, Bayesian Persuasion
JEL Classifications:
C72; D82; D83

∗ Department of Economics, University of Texas at Austin. Email: [email protected]. Thanks to V. Bhaskar, Gleb Domnenko, William Fuchs, Rosemary Hopcroft, Vasudha Jain, Meg Meyer, Vasiliki Skreta, Max Stinchcombe, Yiman Sun, Tymon Tatur, Alex Teytelboym, Thomas Wiseman, and Joseph Whitmeyer. Thanks also to various seminar and conference audiences. This paper is a shorter version of the job market paper of the same name.
In that open field
If you do not come too close, if you do not come too close,
On a summer midnight, you can hear the music...

T.S. Eliot
East Coker
1 Introduction

There is a decision-maker faced with choosing an action in an uncertain world. She does not have direct access to information about the state of the world, but there is a second person who does. The second person (or sender) observes the state (equivalently, his type) before sending a message, which the decision-maker (or receiver) observes before taking an action. (We term the sender's action a message in order to distinguish it from the receiver's action. In some settings, like for instance cheap talk games, the moniker is fitting. In others, like for instance the Spence (1978) [31] model of signaling through educational attainment, where the sender chooses his level of education, labeling his action a message is less appropriate. Throughout, to simplify the language, we arbitrarily impose that the receiver is female and the sender is male.) The receiver values only the information content of the message, and so naturally her welfare increases as the sender's messages become more informative. However, there are a number of potential frictions that could impede this information transmission. First, the sender's and the receiver's preferences over the action taken may be imperfectly aligned. Second, the messages may be costly to the sender, which costs affect the messages that are chosen in equilibrium. As a result, less than full information may be transmitted at equilibrium, and the frictions may even be so severe that no information is transmitted.

Now, suppose that the receiver is not forced to observe the sender's message directly, but may commit ex ante to observe a noisy signal of the message instead. That is, suppose the receiver may choose the degree of transparency, or information structure, in the game. Can less than full transparency help the receiver? Moreover, if less than full transparency is beneficial, then what does the (receiver-)optimal degree of transparency look like, and how can the receiver solve the problem of designing the optimal information structure?

If this were a decision problem in which the information were exogenous, i.e. if there were no sender and instead the message sent to the receiver about the state were from nature, then the answer to the first question would clearly be no. Namely, if the message were exogenous, then the receiver would always (at least weakly) prefer to observe the message itself rather than some noisy signal. Conversely, here, the message is not exogenous, but is instead an equilibrium choice of the sender. Crucially, the sender is aware of the degree of transparency, which thus affects the message, or distribution over messages, that he sends at equilibrium. There is an important trade-off present in the receiver's choice of information structure.
By choosing a more informative signal of the message, the receiver obtains more information for any fixed strategy vector of the sender types. On the other hand, the vector of strategies chosen by the sender types, and hence the information, is an endogenous choice of the sender that he makes cognizant of the information structure. Thus, less transparency may beget a more informative vector of strategies at equilibrium. The optimal degree of transparency arises as a consequence of these trade-offs, and it may strictly benefit the receiver to choose a less informative signal of the sender's message. In short, less than full transparency can strictly benefit the receiver.

The following example illustrates this observation. Sender and receiver are employees of a firm: the receiver is part of upper management, the Chief Operations Officer (COO), say; and the sender is a (local) branch manager. The COO is contemplating whether to close the branch (action 𝐶) or not (action 𝑂). The branch is either viable (state 𝜃_𝐺) or not (state 𝜃_𝐵) and the COO would like to close the branch if the state is 𝜃_𝐵 and keep the branch open otherwise. Explicitly, the COO's payoffs are 𝑢_𝑅(𝑂, 𝜃_𝐺) = 𝑢_𝑅(𝐶, 𝜃_𝐵) = 1, and 𝑢_𝑅(𝑂, 𝜃_𝐵) = 𝑢_𝑅(𝐶, 𝜃_𝐺) = 0.

The COO is unable to observe directly the viability of the branch. Instead, the branch manager is "on the ground" and observes the state. The branch manager must make an investment decision, whether to buy new equipment, say; and all else equal, the branch manager would prefer to invest (message 𝐼) if and only if the state is 𝜃_𝐺. On the other hand, the branch manager would prefer the branch be kept open no matter the state; and, crucially, the payoffs are such that the branch manager would rather choose the incorrect investment decision for the state and have the branch be kept open than choose the correct investment level and have the branch be closed. The state-dependent utilities for the branch manager for each (message, action) combination are

(𝐼, 𝑂): 3, (𝐼, 𝐶): 1, (𝑁, 𝐶): 0, (𝑁, 𝑂): 2 in state 𝜃_𝐺; and
(𝐼, 𝑂): 2, (𝐼, 𝐶): 0, (𝑁, 𝐶): 1, (𝑁, 𝑂): 3 in state 𝜃_𝐵.

Both COO and branch manager share the common prior 𝜇 ∶= Pr(Θ = 𝜃_𝐺) = 2/3: the branch is more likely to be viable than not.

Figure 1: Whether to Close the Branch, Full Transparency

The full transparency scenario is depicted in Figure 1. There, no equilibria exist in which any information is transmitted. The only equilibria are pooling–those in which the branch manager chooses the same message no matter the state–and so the COO's posterior is the same as her prior. The logic behind this is simple: there can be no equilibrium in which both 𝐼 and 𝑁 are sent in such a way that the COO strictly prefers to take a different action after each message. In such a circumstance, the local manager who knows the state is 𝜃_𝐵 always prefers to deviate to the message that is followed by the branch being kept open (𝑂). The COO's payoff is 2/3.

Suppose we introduce a neutral third party, a middle manager, say, who oversees the branch manager. The COO no longer observes the message of the branch manager; instead, the middle manager witnesses the message before communicating to the COO. Because the middle manager is neutral, we may model him as a signal, a stochastic map 𝜋 ∶ 𝑀 → Δ(𝑋), where 𝑋 is some (finite) set of signal realizations.
For any message 𝑚 sent by the branch manager, the middle manager sends signal realization 𝑥 to the COO with probability 𝜋(𝑥|𝑚). One possible signal is one that is completely uninformative and involves just one signal realization, the statement (everything is) 𝑓𝑖𝑛𝑒–no matter what message the sender chooses, the middle manager says that "everything is fine." Formally, the set of signal realizations is 𝑋 = {𝑓𝑖𝑛𝑒}, and 𝜋(𝑓𝑖𝑛𝑒|𝐼) = 𝜋(𝑓𝑖𝑛𝑒|𝑁) = 1.
We term this signal no transparency, and this scenario is depicted in Figure 2.

Figure 2: Whether to Close the Branch, No Transparency

As in the full transparency case, no information is transmitted at equilibrium, but now the branch managers separate and send different messages. In particular, the branch manager in the low state does not invest, since he knows that the receiver will not be able to identify him from this choice due to the garbling by the signal. As in the full transparency case, the COO's payoff is 2/3.

No transparency is too extreme, and the COO can increase her payoff by choosing a more moderately uninformative signal. Indeed, the optimal signal is one in which the middle manager sends one of two signal realizations;
𝑋 = {𝑓𝑖𝑛𝑒, 𝑏𝑎𝑑}. If the branch manager invests then the middle manager always sends 𝑓𝑖𝑛𝑒, and if he doesn't invest then the middle manager sends 𝑓𝑖𝑛𝑒 half of the time and 𝑏𝑎𝑑 the other half of the time. Formally, the optimal signal is

𝜋(𝑏𝑎𝑑|𝑁) = 1/2,  𝜋(𝑏𝑎𝑑|𝐼) = 0
𝜋(𝑓𝑖𝑛𝑒|𝑁) = 1/2,  𝜋(𝑓𝑖𝑛𝑒|𝐼) = 1.
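Before turning to the equilibrium analysis, here is a minimal sketch (in Python) that applies Bayes' law to this signal under the separating strategies discussed below (𝐼 in state 𝜃_𝐺, 𝑁 in state 𝜃_𝐵); the numbers it reports are computed from the primitives stated above rather than quoted from elsewhere.

```python
# Minimal sketch: posterior beliefs and the COO's expected payoff under the
# optimal signal, assuming the separating strategies (I in the good state,
# N in the bad state). All numbers follow from the primitives stated above.
from fractions import Fraction as F

mu = F(2, 3)                          # prior Pr(theta_G)
sigma = {"G": "I", "B": "N"}          # separating strategies (assumed)
pi = {("fine", "I"): F(1), ("bad", "I"): F(0),
      ("fine", "N"): F(1, 2), ("bad", "N"): F(1, 2)}
u_R = {("O", "G"): 1, ("C", "B"): 1, ("O", "B"): 0, ("C", "G"): 0}
prior = {"G": mu, "B": 1 - mu}

payoff = F(0)
for x in ("fine", "bad"):
    # joint probability of signal realization x with each state
    joint = {s: prior[s] * pi[(x, sigma[s])] for s in ("G", "B")}
    px = sum(joint.values())
    post_G = joint["G"] / px
    # the COO keeps the branch open iff she believes it is viable with prob > 1/2
    action = "O" if post_G > F(1, 2) else "C"
    payoff += sum(joint[s] * u_R[(action, s)] for s in ("G", "B"))
    print(f"after '{x}': Pr(viable) = {post_G}, action = {action}")

print("COO payoff with the optimal signal:", payoff)              # 5/6
print("COO payoff from pooling on the prior:", max(mu, 1 - mu))   # 2/3
```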
The optimal signal is depicted in Figure 3. After 𝑓𝑖𝑛𝑒 the COO keeps the branch open (using Bayes' law she believes that the branch is viable with probability 4/5), and after 𝑏𝑎𝑑 she closes the branch (she is certain that the branch is nonviable). The signal begets a separating equilibrium–one in which the local manager chooses not to invest if the branch is not viable and chooses to invest if it is. This gain in informativeness outweighs the garbling by the signal, and the COO obtains a strictly higher equilibrium payoff than with full transparency (5/6 versus 2/3).

Figure 3: Whether to Close the Branch, Optimal Transparency

The signal is pinned down by the incentives of the branch manager in state 𝜃_𝐵. The COO, in designing the information structure, gives local management just enough incentive to be willing to separate. She is able to minimize the amount of time she chooses the "wrong" action in relation to the state, and obtains more information due to the branch manager's willingness to separate.

The COO benefits by not having direct oversight of the branch manager and instead acquiring information through an intermediary. As stated above, the third party could be internal, part of the "chain of command" within the company, or it could be an external auditor. If the COO could directly observe the branch manager's decision, she would not be able to credibly commit to not exploiting the information provided by a branch manager. Thus, with full transparency, she would obtain no information. She would benefit if instead she had a go-between who observed the branch manager's decision before providing her with a recommendation.

Throughout this paper, we explore the problem of solving for the receiver-optimal degree of transparency in communication games. We restrict attention to the environment in which the sender and receiver share a common prior, and allow for arbitrary state-dependent preferences for the sender and receiver over both the sender's message and the receiver's action. For most of this paper we focus on what we term simple signaling games, which are those in which the receiver has no intrinsic preferences over the message chosen by the sender. That is, they are games in which the message chosen by the sender is not an argument of her utility function. (We focus throughout on the receiver-optimal Perfect Bayesian Equilibrium. Henceforth, by equilibrium we refer to the PBE that is best for the receiver.)

We compare the receiver's optimal transparency problem to the commitment solution for the receiver, which corresponds to the scenario in which the receiver can commit to a distribution over actions conditioned on the sender's choice of message. This is just the optimal transparency problem with the obedience (sequential rationality) constraints relaxed. Hence, if the commitment strategy that maximizes the receiver's payoff also satisfies the receiver's obedience constraints, then it must correspond to the optimal information structure. The commitment problem is much simpler to solve, and we establish that it reduces to a simple linear program.

In the main result of this paper, Theorem 3.3, we establish that in any simple signaling game with two actions, "Opacity equals Commitment". Namely, for any number of states and messages, provided the receiver has at most two actions, the optimal commitment solution satisfies the receiver's obedience constraints. Thus, to solve the information design problem we need only search for a solution to the much simpler commitment problem. Furthermore, such results need not be limited merely to simple signaling games.
We also establish sufficient conditions that guarantee that in non-simple signaling games with two receiver actions the receiver's optimal transparency solution is equivalent to the commitment solution. Moreover, these conditions are quite natural and hold for a variety of settings, including a paradigmatic game in biology, the Sir Philip Sidney game (optimal transparency in that game is the subject of Whitmeyer (2019) [34]), and a political setting, which we explore in Section 5.2.

We also ask whether ex ante learning is always beneficial. To put it another way, suppose that the receiver may, prior to the communication game, acquire public information in the form of the realization of some Blackwell experiment (Blackwell 1951 [4]). It is obvious that some experiments benefit the receiver, e.g. a fully informative experiment always does; but is it true that any experiment benefits the receiver?

Surprisingly, even in simple games with just two actions, ex ante information acquisition may hurt the receiver if she cannot choose the information structure in the ensuing game. However, as we find in this paper, in two-action simple games, if the receiver may choose the optimal information structure, then information acquisition is always beneficial. Thus, in the class of binary action simple communication games, the ability to choose the optimal degree of transparency ensures that the value of information is always positive, even though with full transparency it may not be.

All-in-all, we discover that information design is remarkably powerful. In two-action simple communication games, it allows the receiver to obtain a payoff as high as that which she could achieve with commitment. Moreover, it guarantees that ex ante information can only help her, even though without information design it may harm her.

Section 2 presents the formal model, Section 3 explores the connection between the receiver-optimal information structure and commitment, and Section 4 explores the possible benefits and drawbacks to ex ante learning. Section 5 illustrates applications of these ideas in finance/accounting, political economy, and the academic job market in economics; and Section 6 concludes.
The past decade has seen a rapid proliferation of works that explore the underlying ideas from Kamenica and Gentzkow (2011) [20] in different ways. In that paper, as well as in many of the works that followed, the underlying information is exogenous, and the persuader's (or persuaders') problem is how to transmit this information in a particular way in order to induce one or many receivers to take actions favorable (or at least as favorable as possible) to the persuader(s). In some sense, then, this paper can be viewed as the inverse of the Bayesian Persuasion problem. Instead of a sender aiming to persuade a receiver, here a receiver seeks to elicit information from a sender–hence, "Bayesian Elicitation."

More recently, commencing with Boleslavsky and Kim (2017) [8], the literature has explored the situation in which the information is endogenous (in fact, in Boleslavsky and Kim, the state itself is endogenous). That is, there is now some information generation process, which is itself affected by the information structure or signal chosen by the persuader. In Boleslavsky and Kim, this is manifested in the form of a moral hazard problem for the agent–the signal must not only convince the receiver, but provide incentives for the agent as well.

In this paper, the principal is, herself, the receiver, and so the signal is in part designed to persuade her. However, the sender is also conscious of the signal, and his choice of action is shaped by the signal. Accordingly, the optimal signal is chosen not just with persuasion in mind but incentive provision as well. This dual objective is also present in Asriyan, Fuchs, and Green (2017) [2]. In their model,
𝑁 + 1 sellers have an indivisible asset whose value is private information, and have two periods in which they may trade it. In the portion of the paper relevant to this one, they ask how a planner "should disclose trade behavior to maximize social welfare." As these authors note, persuasion is not the only objective, since the information policy "affects the information content of trading, and hence affects trading itself." Other papers that involve similar trade-offs include Le Treust and Tomala (2017) [22] and Georgiadis and Szentes (2017) [16]. To help deal with some technical issues related to this idea of a "constrained information design" problem, several authors have written notes; see Doval and Skreta (2018) [11] and Zhong (2018) [35].

Ball (2019) [3] investigates the design of a scoring rule in order to elicit information from a sender who can distort multiple features (what the receiver can observe) about himself. Ball's paper echoes the trade-off that we consider here: "the intermediary must consider how the scoring rule motivates the sender to distort her features." Analogously, he finds that coarser information can benefit the receiver due to the endogeneity of the information, which is produced by the sender.

Another paper that bears mention is Salamanca (2017) [30]. There the author relaxes the commitment assumption endemic to the Bayesian persuasion literature and introduces a mediator, who facilitates communication. Communication is cheap talk, though as in this paper there is the interplay between the persuasion motive and the obedience/incentive compatibility requirement for the information designer.

The paper closest to this one is Rick (2013) [28], who forwards the idea that mis-communication–what we in this paper refer to as limited transparency–can be useful in communication games, a category that encompasses costly signaling games (e.g. Spence (1978) [31]), cheap talk (e.g. Crawford and Sobel (1982) [10]), and games with verifiable messages (e.g. Grossman (1981) [18]). He assigns Pareto weights to the different sender types and the receiver, and asks how mis-communication can be helpful (welfare improving). It can improve equilibrium information transmission without raising communication costs and/or it can reduce communication costs without changing the quality of information transmission.

Rick focuses on communication games in which the receiver does not have preferences over the message sent by the sender (what we term simple signaling games in this paper) and the sender's payoff is additively separable in his benefit from the receiver's action and his cost of sending a message. As Rick shows, benefits from mis-communication, if there are any, must come from at least one of two sources: 1. mis-communication can remove some profitable deviations to unused messages and 2. mis-communication can expand the set of strategy profiles that implements a given distribution of posteriors in equilibrium.

In contrast to Rick's paper, we explore the receiver's problem, which allows for different results. In particular, if the receiver has two actions we find an equivalence between the commitment problem and the information design problem (for simple signaling games). In short, the focus of this work is not that the receiver may benefit from less than full transparency but instead how she should choose the degree of transparency optimally.

There are other papers in the literature that explore the benefits of noise and look at optimal information structures in cheap talk settings.
Myerson (1991) [26] famously describes a cheap talk game in which messages are sent via carrier pigeon. Remarkably, a somewhat wayward pigeon, one who occasionally becomes lost, improves communication between the two players. In a well known paper, Forges (1990) [13] considers mediation in a job-market example, in which the signals about the prospective candidate's type are cheap talk. As in the other papers in this literature, this introduction of a mediator enlarges the set of equilibrium payoffs.

Quite a few other papers–Goltsman, Hörner, Pavlov, and Squintani (2009) [17], Ganguly and Ray (2011) [15], Ivanov (2009) [19], and Blume, Board, and Kawamura (2007) [5]–look at mediation in the context of cheap talk. However, due to the difficulty of the problem, these papers focus on the uniform-quadratic setting from Crawford and Sobel (1982) [10], and Blume et al. restrict mediation further to a specific form of noise: after the sender chooses a message 𝑚 there is an error with some probability 𝜖, after which the receiver observes a message 𝑚′ drawn from the uniform distribution on [0, 1], independent of the chosen message 𝑚. With probability 1 − 𝜖, there is no error and the receiver observes the chosen message 𝑚. They show (in conjunction with Goltsman et al.) that in the uniform-quadratic setting this corresponds to the receiver-optimal information structure.

Because the messages are cheap talk, these papers, as does Salamanca, view the mediation problem as a centralized, mechanism design problem. They use the idea of a Communication Equilibrium, as formulated in Myerson (1986) [25] and Forges (1986) [12]. Each sender reports his type to a mediator, who then sends a (possibly random) recommendation to the receiver. In this paper, provided the game is cheap talk and the set of messages is sufficiently large, the optimal signal 𝜋 is equivalent to the centralized mediator-driven problem analyzed in these cheap talk papers (we can think of the separating equilibrium as the reporting of the sender's type). However, if the message set is not sufficiently large, then this equivalence is lost. Moreover, if the messages are costly (i.e. the game is not cheap talk), then the problem we explore is different from the centralized problems of the literature.

There is experimental evidence that senders respond to different information structures. Blume, Lai, and Lim (2019) [6] examine a particular class of information structures, randomized response, and find that randomized response can induce senders to be significantly more truthful. (Randomized response was originally introduced by Warner (1965) [32]. Blume et al. (2019) [6] use a simple version of the technique: the sender is asked one of two yes or no questions before an intermediary reveals the answer to the receiver. However, the intermediary may only partially reveal the question to which the answer corresponded, thereby adding noise to the response.) Note that they also find that the information loss due to the intermediary's garbling may outweigh the gain in information due to the increased frequency of truthfulness. However, in a subsequent paper, Blume, Lai, and Lim (2019) [7] explore mediated cheap talk experimentally and find that mediation pushes sender types toward separation, which increases the receiver's payoff. Crucially, both papers find that the degree of transparency does affect the sender's behavior.

Finally, the notion that noise can improve communication has been mentioned in the biology literature as well: Lachmann and Bergstrom (1998) [21] examine a specific signaling game, the "Sir Philip Sidney" game, and illustrate through an example that the receiver may benefit by having some degree of perceptual error. That is, they consider the game with an exogenous garbling of the sender's signal and argue that such an environment may be beneficial for the receiver. In addition, in concurrent work to this one, Whitmeyer (2019) [34] looks at strategic inattention in the Sir Philip Sidney game, a paradigmatic signaling game in biology. There, attention is restricted to specific information structures, those corresponding to inattention on the part of the receiver; and we find that the receiver always weakly (and often strictly) benefits from being somewhat inattentive.
2 Model

The setup is a version of the standard communication game. There are two players: a sender, 𝑆; and a receiver, 𝑅. The sender has private information, his type (or the state) 𝜃 ∈ Θ. He observes his type and chooses a message, 𝑚, from the set of messages 𝑀. The receiver observes 𝑚, but not 𝜃, updates her belief about the sender's type and message using Bayes' law, then chooses a mixture over actions. We assume that the sets 𝑀 and Θ are finite.

𝑆 and 𝑅 share a common prior over the state of the world, 𝜇 ∈ Δ(Θ), where 𝜇(𝜃) = Pr(Θ = 𝜃). Each player, 𝑆 and 𝑅, has state-dependent preferences over the message sent and the action taken, which are represented by the continuous utility functions 𝑢_𝑖, 𝑖 ∈ {𝑆, 𝑅}: 𝑢_𝑖 ∶ 𝑀 × 𝐴 × Θ → ℜ.

The timing of the game is as follows. First, 𝑆 observes his private type (or the state) 𝜃 ∈ Θ, and chooses a message 𝑚 ∈ 𝑀 to send to 𝑅. 𝑅 observes 𝑚, updates her belief, and chooses action 𝑎 ∈ 𝐴.

We extend the utility functions for the players to behavioral strategies. A behavioral strategy for 𝑆, 𝜎(⋅|𝜃), is a probability distribution over 𝑀. Similarly, a behavioral strategy for 𝑅, 𝜌(⋅|𝑚), is a probability distribution over 𝐴. 𝜎(𝑚|𝜃) is the probability that a type 𝜃 sender sends message 𝑚.

We focus on receiver-optimal Perfect Bayesian Equilibrium, which we define in the standard manner. As noted in the introduction, by equilibrium or PBE, we refer to those particular equilibria. In addition, throughout this paper, we focus primarily on signaling games that fall into the following class:

Definition 2.1. A communication game is Simple if the receiver has preferences over the action taken, 𝑎, and the state (or sender's type), 𝜃, but not over the message chosen by the sender, 𝑚. Equivalently, a game is simple provided the receiver's preferences are represented by the continuous utility function 𝑢_𝑅 ∶ 𝐴 × Θ → ℜ.

The principal purpose of this paper is to explore receiver-optimal information structures, alternatively termed the optimal degree of transparency, in the context of signaling games. Suppose that 𝑅 can commit to an information structure in the following sense. There is a compact set of signal realizations 𝑋, where |𝑋| ≥ |Δ(Θ)|, i.e. the receiver is unconstrained by the size of this set. A signal 𝜋 is a mapping 𝜋 ∶ 𝑀 → Δ(𝑋), where 𝜋(𝑥|𝑚) ∶= Pr(𝑋 = 𝑥|𝑀 = 𝑚). Instead of observing 𝑚, 𝑅 instead observes 𝑥, before choosing a behavioral strategy 𝜌 ∶ 𝑋 → Δ(𝐴) so as to maximize her expected utility.

The receiver's value function is (writing 𝑛 ∶= |Θ|, 𝑡 ∶= |𝑀|, 𝑢 ∶= |𝑋|, and 𝑘 ∶= |𝐴|)

V = \sum_{i=1}^{n} \mu(\theta_i) \sum_{j=1}^{t} \sigma_i(m_j) \sum_{e=1}^{u} \pi(x_e|m_j) \sum_{l=1}^{k} \rho(a_l|x_e) \, u_R(m_j, a_l, \theta_i)

and she solves sup_{𝜋,𝜎,𝜌} {𝑉} such that

\rho(a_l|x_e) \in \arg\max_{\rho(a_l|x_e)} \left\{ \sum_{i=1}^{n} \mu(\theta_i) \sum_{j=1}^{t} \sigma_i(m_j) \pi(x_e|m_j) \sum_{l=1}^{k} \rho(a_l|x_e) \, u_R(m_j, a_l, \theta_i) \right\}   (O_l)

for all 𝑥_𝑒 ∈ 𝑋; and

\sigma_i \in \arg\max_{\sigma_i} \left\{ \sum_{j=1}^{t} \sigma_i(m_j) \sum_{e=1}^{u} \pi(x_e|m_j) \sum_{l=1}^{k} \rho(a_l|x_e) \, u_S(m_j, a_l, \theta_i) \right\}   (IC_i)

for all 𝜃_𝑖 ∈ Θ.

As we will shortly see, if the game is simple and the receiver only has two actions, the maximization problem can be reduced to a finite collection of linear programming problems. Moreover, we also introduce the following definition and a subsequent "Revelation Principle"-like result, which greatly simplifies the receiver's problem.

Definition 2.2. A signal, 𝜋, is Direct if 𝑋 = 𝐴. That is, a direct signal recommends actions.

Then, the following proposition establishes that it is without loss of generality to restrict attention to direct signals, which recommend actions to the receiver.
Proposition 2.3.
For any equilibrium triple, (𝜋, 𝜎, 𝜌), that yields a payoff of 𝑣 to the receiver, there is another equilibrium triple, (𝜋′, 𝜎′, 𝜌′), that yields the same payoff, 𝑣, to the receiver; where 𝜋′ is a direct signal, 𝜎 = 𝜎′, and 𝜌′(𝑎|𝑎′) = 1 for 𝑎′ = 𝑎 and 𝜌′(𝑎|𝑎′) = 0 for 𝑎′ ≠ 𝑎.

Proof. Consider any equilibrium triple (𝜋, 𝜎, 𝜌). Now introduce, for each 𝑥_𝑒 and each action 𝑎_𝑙 in the support of 𝜌(⋅|𝑥_𝑒), two new mappings:

1. ̂𝜋 ∶ 𝑀 → Δ(̂𝐴), where ̂𝜋(𝑎_{𝑒𝑙}|𝑚_𝑗) ∶= 𝜋(𝑥_𝑒|𝑚_𝑗)𝜌(𝑎_𝑙|𝑥_𝑒); and
2. 𝜌′, where 𝜌′(𝑎_𝑙|𝑎_{𝑒𝑙}) = 1 and 𝜌′(𝑎_𝑚|𝑎_{𝑒𝑙}) = 0 for 𝑚 ≠ 𝑙.

That is, 𝑎_{𝑒𝑙} is the instruction to play 𝑎_𝑙 that induces the same belief as 𝑥_𝑒. Clearly the set ̂𝐴 may be larger than 𝐴–it may have multiple "duplicate" recommendations.

By construction, for each 𝑎_{𝑒𝑙}, action 𝑎_𝑙 is a best response. Moreover, it is easy to see that both the obedience and IC constraints are satisfied and that the expected value for the receiver is the same. Finally, introduce the garbling 𝑔 ∶ ̂𝐴 → Δ(𝐴), where 𝑔(𝑎_𝑖|𝑎_{𝑒𝑙}) = 1 for 𝑖 = 𝑙 and 𝑔(𝑎_𝑖|𝑎_{𝑒𝑙}) = 0 for 𝑖 ≠ 𝑙. Define 𝜋′ ∶= 𝑔 ◦ ̂𝜋. It is easy to see that the IC constraints remain satisfied (since each message will lead to the same distribution of actions chosen by the receiver). Moreover, the obedience constraints must be satisfied as well, since 𝜋′ is less Blackwell informative than ̂𝜋: if it were optimal for the receiver to choose an action other than 𝑎_𝑙 after observing recommendation 𝑎_𝑙 (recall that it is optimal for the receiver to choose 𝑎_𝑙 after 𝑎_{𝑒𝑙} for any 𝑒), then the receiver would have a higher payoff under the less informative distribution, which contradicts Blackwell's Theorem (Blackwell 1951 [4]). Hence (𝜋′, 𝜎′, 𝜌′) is also an equilibrium, and the receiver's expected payoff remains the same. ■

In the remainder of the paper, we restrict attention to direct signals, 𝜋, and thus the receiver's problem can be reduced to

\sup_{\pi, \sigma} \left\{ \sum_{i=1}^{n} \mu(\theta_i) \sum_{j=1}^{t} \sigma_i(m_j) \sum_{l=1}^{k} \pi(a_l|m_j) \, u_R(m_j, a_l, \theta_i) \right\}   (⋆)

such that

\sum_{i=1}^{n} \mu(\theta_i) \sum_{j=1}^{t} \sigma_i(m_j) \pi(a_l|m_j) \, u_R(m_j, a_l, \theta_i) \geq \sum_{i=1}^{n} \mu(\theta_i) \sum_{j=1}^{t} \sigma_i(m_j) \pi(a_l|m_j) \, u_R(m_j, a_{l'}, \theta_i)   (O_l)

for all 𝑎_𝑙, 𝑎_{𝑙′}, 𝑙, 𝑙′ = 1, … , 𝑘; and

\sigma_i \in \arg\max_{\sigma_i} \left\{ \sum_{j=1}^{t} \sigma_i(m_j) \sum_{l=1}^{k} \pi(a_l|m_j) \, u_S(m_j, a_l, \theta_i) \right\}   (IC_i)

for all 𝜃_𝑖, 𝑖 = 1, … , 𝑛.

Importantly, as the following result states, it is not without loss of generality to restrict the sender types to pure strategies.

Proposition 2.4.
For three or more (receiver) actions, there is not always a solution to the receiver's optimal transparency problem in which each type of sender chooses a pure strategy.

Proof.
Proof is via counterexample. We revisit the game from Lemma 4.3 in Whitmeyer (2019) [33]. There are three types of sender: 𝜃_𝐿, 𝜃_𝑀, and 𝜃_𝐻. A belief is a triple (𝜇_𝐿, 𝜇_𝑀, 𝜇_𝐻), and the prior is (1/4, 1/4, 1/2).

This game is cheap talk with transparent motives: each sender type gets utility 1 if the receiver chooses 𝑙 or 𝑠, and 0 if the receiver chooses 𝑥. The receiver's preferences are given as follows:

Action | 𝜃_𝐿   | 𝜃_𝑀   | 𝜃_𝐻
𝑙      | 0     | 1     | 2
𝑠      | 13/24 | 13/24 | 1
𝑥      | 1     | 0     | 1

Suppose that there are just two messages, 𝑔 and 𝑏. As we ascertain in Whitmeyer (2019) [33], the receiver-optimal equilibrium with full transparency is one in which 𝜃_𝐻 and 𝜃_𝐿 choose different messages and 𝜃_𝑀 mixes between those messages (𝑔 and 𝑏). The receiver's payoff in this equilibrium is strictly greater than the pooling payoff. On the other hand, if we look for the receiver-optimal equilibrium in which each type chooses a pure strategy under any information structure, it is easy (though tedious) to verify that the maximum payoff the receiver can obtain is the pooling payoff, 5/4. ■

In contrast, the next result establishes that in cheap talk games with sufficiently many messages, it is without loss of generality to restrict attention to equilibria in which the sender types separate. This is extremely useful since it ensures that the receiver need merely solve a linear program.

Proposition 2.5.
In a cheap talk game, let the number of messages, 𝑡, be weakly greater than the number of states, 𝑛. Then, if the receiver can achieve some payoff, 𝑣, under some information structure and equilibrium, (𝜋, 𝜎), then she can achieve that payoff under some information structure and equilibrium (̂𝜋, ̂𝜎), where ̂𝜎 corresponds to a fully separating equilibrium.

Proof. Consider some arbitrary type 𝜃_𝑖. He must have

W := \sum_{l=1}^{k} \pi(a_l|m_a) \, u_S(a_l, \theta_i) = \sum_{l=1}^{k} \pi(a_l|m_b) \, u_S(a_l, \theta_i)

for all 𝑚_𝑎, 𝑚_𝑏 in the support of his mixed strategy; and

W = \sum_{l=1}^{k} \pi(a_l|m_a) \, u_S(a_l, \theta_i) \geq \sum_{l=1}^{k} \pi(a_l|m_c) \, u_S(a_l, \theta_i)

for all 𝑚_𝑎 in the support of his mixed strategy, 𝜎_𝑖, and all 𝑚_𝑐 not in the support of his mixed strategy. Next, define ̂𝜎_𝑖(𝑚_𝑗) = 1 if 𝑗 = 𝑖 and 0 otherwise (each type is now separating), and call this vector of strategies ̂𝜎. Moreover, define ̂𝜋 by

\hat{\pi}(a_l|m_i) := \sum_{j=1}^{t} \sigma_i(m_j) \pi(a_l|m_j).

It is easy to verify via direct substitution that the receiver's payoff is the same and that the obedience constraints are satisfied. It remains to verify that the IC constraints for the sender types are satisfied. Observe that type 𝜃_𝑖, in choosing any 𝑚_𝑑, obtains

\sum_{l=1}^{k} \hat{\pi}(a_l|m_d) \, u_S(a_l, \theta_i) = \sum_{l=1}^{k} \sum_{j=1}^{t} \sigma_d(m_j) \pi(a_l|m_j) \, u_S(a_l, \theta_i) = \sum_{j=1}^{t} \sigma_d(m_j) \sum_{l=1}^{k} \pi(a_l|m_j) \, u_S(a_l, \theta_i).

If 𝑑 = 𝑖 then this expression is

\sum_{j=1}^{t} \sigma_i(m_j) \sum_{l=1}^{k} \pi(a_l|m_j) \, u_S(a_l, \theta_i) = W

by the fact that 𝜎_𝑖 was his previous equilibrium strategy and hence is nonzero only for messages which yield an expected payoff of 𝑊 (the indifference condition above). If 𝑑 = 𝑟 ≠ 𝑖 then this expression is

\sum_{j=1}^{t} \sigma_r(m_j) \sum_{l=1}^{k} \pi(a_l|m_j) \, u_S(a_l, \theta_i) \leq W

since the highest expected payoff any message can yield is 𝑊 and since 𝜎_𝑟 may place nonzero probability on messages that yield type 𝜃_𝑖 an expected payoff that is less than 𝑊 (the inequality above). ■

We end the section with the following result, which pertains to belief-based equilibrium refinements.
Lemma 2.6.
Suppose (𝜎, 𝜌) is an equilibrium in the game with full transparency that is eliminated by some belief-based refinement. Then, there exists a signal 𝜋 such that (𝜎, 𝜋) is an equilibrium with 𝜋 = 𝜌 that is robust to any belief-based refinement, provided 𝜌 has full support over 𝐴; i.e. provided every action is played with positive probability.

Proof. Under full transparency, suppose that there is some equilibrium under which message 𝑚′ is not played with positive probability on the equilibrium path. Thus, there must be some behavioral strategy 𝜌(𝑎|⋅) sustained by some belief 𝜇 such that no type can deviate to 𝑚′ profitably. Suppose that there is a belief-based equilibrium refinement such that there exists no 𝜇 that sustains such a 𝜌.

Introduce signal 𝜋 and suppose that every action 𝑎′ needed to head off a deviation is played with positive probability on the equilibrium path. Then, simply choose 𝜋 such that following a deviation to an off-path 𝑚′, the signal 𝜋(𝑎|𝑚) is such that the resulting distribution over actions is just the 𝜌(𝑎|𝑚′) needed to sustain the equilibrium.

If 𝜌 has full support over 𝐴, the supposition in the previous paragraph obviously holds (we have actually proved a slightly stronger result). ■

3 Opacity = Commitment
In this section, we establish one of the two main results of the paper: that in simple communication games in which the receiver has two actions, information design is as good as commitment.

We begin with the following proposition, which states that any payoff that the receiver can obtain through information design, she can achieve with the ability to commit to mixtures over actions as a function of the sender's message.
Proposition 3.1.
If a receiver can achieve a payoff at equilibrium under a particular information structure, then the receiver can achieve that payoff at equilibrium in a game in which ex ante she commits to a distribution of actions conditioned on the sender's message 𝑚.

Proof. Take any garbling, 𝜋, of the message 𝑚, and the optimal response by the receiver to this garbling. Every message, 𝑚, will lead to a distribution of actions by the receiver. The same payoff can be achieved by the receiver committing to the same distribution of actions conditioned on each message, 𝑚. ■

Another way to deduce this result is to note that the receiver's choice of commitment strategy is a choice of mapping 𝜋 ∶ 𝑀 → Δ(𝐴), and in her commitment problem she chooses 𝜋 and 𝜎 to maximize

\sum_{i=1}^{n} \mu(\theta_i) \sum_{j=1}^{t} \sigma_i(m_j) \sum_{l=1}^{k} \pi(a_l|m_j) \, u_R(m_j, a_l, \theta_i)   (⋆)

such that

\sigma_i \in \arg\max_{\sigma_i} \left\{ \sum_{j=1}^{t} \sigma_i(m_j) \sum_{l=1}^{k} \pi(a_l|m_j) \, u_S(m_j, a_l, \theta_i) \right\}   (IC_i)

for all 𝜃_𝑖, 𝑖 = 1, … , 𝑛. That is, her commitment problem is her information design problem with the obedience constraints relaxed.

As the next result illustrates, the commitment problem is much easier to solve than the information design problem, since it reduces to a finite collection of linear programs.

Lemma 3.2.
There is a receiver-optimal equilibrium with commitment in which each type of sender chooses a pure strategy.

Proof. As noted above, in the receiver's commitment problem, she maximizes (⋆) such that constraints (IC_i) are satisfied for all types 𝜃_𝑖. Suppose that at the optimum there is a type mixing, say 𝜃_1. He must be indifferent over each message in the support of his mixed strategy: suppose that two such messages are 𝑚_1 and 𝑚_2. Then, since the receiver's objective is linear in 𝜎_𝑖(𝑚_𝑗), we have

\frac{\partial V}{\partial \sigma_1(m_1)} = \mu(\theta_1) \sum_{l=1}^{k} \pi(a_l|m_1) \, u_R(m_1, a_l, \theta_1) =: \xi

\frac{\partial V}{\partial \sigma_1(m_2)} = \mu(\theta_1) \sum_{l=1}^{k} \pi(a_l|m_2) \, u_R(m_2, a_l, \theta_1) =: \psi

where 𝜉 and 𝜓 are constants. It is easy to see that we must have 𝜉 = 𝜓 since by construction 𝑉 is being maximized. Hence, we may set either 𝜎_1(𝑚_1) or 𝜎_1(𝑚_2) to 0. We may do the same with each other message in the support of the mixed strategy until finally we arrive at one message chosen with probability 1 and the rest with probability 0. The receiver's payoff is unchanged and the sender is still unwilling to deviate. We may do likewise with each other type 𝜃_{𝑖≠1}, until each type is choosing a pure strategy. ■

One consequence of this result is that there is a commitment solution for the receiver in which at most 𝑡 = |𝑀| messages are used. Moreover, for a fixed vector of pure strategies, the receiver's commitment problem is a linear program. Thus, the receiver's commitment problem is merely a finite collection of linear programming problems.
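To make this concrete, here is a small Python sketch of that finite collection of linear programs: it enumerates pure strategy profiles for the sender and, for each profile, solves the linear program over the commitment strategy 𝜋 subject to the sender's IC constraints. The demo data is the intra-firm example from the introduction (with the branch manager payoffs listed there); the function itself is generic.

```python
# A sketch of the commitment problem as a finite collection of linear programs
# (one LP per pure strategy profile of the sender), using scipy. The demo data
# is the intra-firm example from the introduction.
from itertools import product
import numpy as np
from scipy.optimize import linprog

def commitment_value(mu, uR, uS):
    """mu: prior over n states; uR, uS: arrays of shape (t, k, n) giving
    u(m_j, a_l, theta_i). Returns the receiver's optimal commitment payoff."""
    t, k, n = uR.shape
    best = -np.inf
    for assign in product(range(t), repeat=n):      # pure strategy profiles
        # variables: x[j, l] = pi(a_l | m_j), flattened to length t*k
        c = np.zeros(t * k)
        for i, j in enumerate(assign):
            c[j * k: (j + 1) * k] -= mu[i] * uR[j, :, i]  # maximize => minimize -V
        # each pi(. | m_j) is a probability distribution
        A_eq = np.zeros((t, t * k)); b_eq = np.ones(t)
        for j in range(t):
            A_eq[j, j * k: (j + 1) * k] = 1.0
        # IC: no type prefers deviating to another message
        A_ub, b_ub = [], []
        for i, j in enumerate(assign):
            for jp in range(t):
                if jp == j:
                    continue
                row = np.zeros(t * k)
                row[jp * k: (jp + 1) * k] += uS[jp, :, i]
                row[j * k: (j + 1) * k] -= uS[j, :, i]
                A_ub.append(row); b_ub.append(0.0)
        res = linprog(c,
                      A_ub=np.array(A_ub) if A_ub else None,
                      b_ub=np.array(b_ub) if b_ub else None,
                      A_eq=A_eq, b_eq=b_eq, bounds=[(0, 1)] * (t * k))
        if res.success:
            best = max(best, -res.fun)
    return best

# Demo: the intra-firm example; messages (I, N), actions (O, C), states (G, B).
mu = np.array([2 / 3, 1 / 3])
uR = np.zeros((2, 2, 2))             # receiver is indifferent to the message
uR[:, 0, 0] = 1.0                    # O is correct in state G
uR[:, 1, 1] = 1.0                    # C is correct in state B
uS = np.array([[[3, 2], [1, 0]],     # message I: actions O then C, states (G, B)
               [[2, 3], [0, 1]]])    # message N
print(commitment_value(mu, uR, uS))  # approximately 5/6
```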
Next, we use this lemma to establish one of our main results.

Theorem 3.3 (Opacity Equals Commitment). Let |𝐴| = 2. Then, in any simple signaling game, the optimal transparency solution coincides with the commitment solution.

Proof.
If the same action is optimal in every state then the proof is trivial. Hence suppose that each action is strictly optimal in at least one state. Let the number of actions, 𝑘 = 2, and for simplicity write 𝑣_𝑖 ∶= 𝑢_𝑅(𝑎_1, 𝜃_𝑖), 𝑤_𝑖 ∶= 𝑢_𝑅(𝑎_2, 𝜃_𝑖), 𝜋_𝑗 ∶= 𝜋(𝑎_1|𝑚_𝑗), and 1 − 𝜋_𝑗 ∶= 𝜋(𝑎_2|𝑚_𝑗).

Next, observe that the payoff upon observing nothing for the receiver (i.e. should she simply choose her optimal action based on the prior) is

\hat{V} = \max \left\{ \sum_{i=1}^{n} \mu(\theta_i) v_i, \; \sum_{i=1}^{n} \mu(\theta_i) w_i \right\}.

By Lemma 3.2, in the commitment solution each type of sender chooses a pure strategy, so, abusing notation slightly, we may reduce \sum_{j=1}^{t} \sigma_i(m_j) \pi_j to just \pi_i and \sum_{j=1}^{t} \sigma_i(m_j)(1 − \pi_j) to 1 − \pi_i. Thus, in the optimal commitment solution, the receiver's payoff may be rewritten as

\sum_{i=1}^{n} \mu(\theta_i) \left[ \pi_i v_i + (1 − \pi_i) w_i \right]

and the obedience constraints that must be satisfied are

\sum_{i=1}^{n} \mu(\theta_i) v_i \pi_i \geq \sum_{i=1}^{n} \mu(\theta_i) w_i \pi_i   (O1)

\sum_{i=1}^{n} \mu(\theta_i) w_i (1 − \pi_i) \geq \sum_{i=1}^{n} \mu(\theta_i) v_i (1 − \pi_i).   (O2)

Suppose first that the receiver chooses each action with positive probability on path (and so Bayes' law in the obedience constraints is defined). Moreover, without loss of generality, let \hat{V} = \sum_{i=1}^{n} \mu(\theta_i) v_i, i.e. \sum_{i=1}^{n} \mu(\theta_i) v_i \geq \sum_{i=1}^{n} \mu(\theta_i) w_i: should she observe nothing, the receiver weakly prefers action 𝑎_1. Since the optimal commitment payoff is at least \hat{V}, we must have

\sum_{i=1}^{n} \mu(\theta_i) \left[ \pi_i v_i + (1 − \pi_i) w_i \right] \geq \sum_{i=1}^{n} \mu(\theta_i) v_i, \quad \text{or} \quad \sum_{i=1}^{n} \mu(\theta_i) w_i (1 − \pi_i) \geq \sum_{i=1}^{n} \mu(\theta_i) v_i (1 − \pi_i),

so the second obedience constraint is satisfied. The first obedience constraint is satisfied as well: were it violated, summing the violated inequality with (O2) would yield \sum_{i=1}^{n} \mu(\theta_i) w_i > \sum_{i=1}^{n} \mu(\theta_i) v_i, a contradiction.

Second, suppose that the receiver does not choose each action with positive probability on path, and without loss of generality let \sum_{i=1}^{n} \mu(\theta_i) v_i \geq \sum_{i=1}^{n} \mu(\theta_i) w_i. Then, it is clear that the receiver must be told to play action 𝑎_1, and so the Bayes' law used to derive the second obedience constraint must not be defined. Since there is at least one state in which the second action is optimal, we can always stipulate an off-path belief such that constraint (O2) is satisfied. Finally, by construction the first obedience constraint is satisfied. ■

This result allows us to solve for the optimal degree of transparency in any two-action simple game with ease. We need simply solve the receiver's commitment problem, which consists of at most two signal realizations. Moreover, thanks to Lemma 3.2, this problem can itself be reduced–it suffices to merely consider the cases in which the different types of sender choose pure strategies.

Here we solve for the optimal information structure in the introductory example, so as to illustrate the usefulness of the above results. Armed with Theorem 3.3, our problem is simple: we need only solve the receiver's commitment problem. Since the receiver has just two actions, we can describe her commitment strategy with just two variables, 𝑝 and 𝑞, where

𝑝 ∶= 𝜋(𝐶|𝐼), and 𝑞 ∶= 𝜋(𝐶|𝑁).

Namely, the COO commits to close the branch with probability 𝑝 following message 𝐼, and with probability 𝑞 following message 𝑁.

Next, from Lemma 3.2 we need only search for the optimum among the cases in which the sender in each state chooses a pure strategy. First, suppose that the sender in the good state chooses 𝐼 and that the sender in the bad state chooses 𝑁. The receiver's objective function is

\max_{p, q} \left\{ \mu [1 − p] + (1 − \mu) q \right\}

and the two incentive compatibility constraints for the sender (in each state) are:

p + 3(1 − p) \geq 2(1 − q)   (S1)

q + 3(1 − q) \geq 2(1 − p).   (S2)

It is clear that only the second constraint binds, and so the optimal 𝑝 and 𝑞 are 𝑝 = 1/2 and 𝑞 = 1 for 𝜇 ≤ 1/2, and 𝑝 = 0 and 𝑞 = 1/2 for 𝜇 ≥ 1/2. These yield payoffs to the receiver of 1 − 𝜇/2 and (1 + 𝜇)/2, respectively.
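As a quick check of this two-variable program, the sketch below solves it directly with an off-the-shelf LP solver for a prior on each side of 1/2; it is merely a numerical verification of the solution above, under the same constraints (S1) and (S2).

```python
# Numerical check of the two-variable commitment LP above:
# maximize mu*(1 - p) + (1 - mu)*q subject to (S1) and (S2).
import numpy as np
from scipy.optimize import linprog

def solve_commitment(mu):
    c = np.array([mu, -(1 - mu)])           # minimize mu*p - (1 - mu)*q
    # S1: 3 - 2p >= 2 - 2q  <=>   2p - 2q <= 1
    # S2: 3 - 2q >= 2 - 2p  <=>  -2p + 2q <= 1
    A_ub = np.array([[2.0, -2.0], [-2.0, 2.0]])
    b_ub = np.array([1.0, 1.0])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 1), (0, 1)])
    p, q = res.x
    return p, q, mu * (1 - p) + (1 - mu) * q

for mu in (1 / 3, 2 / 3):
    p, q, value = solve_commitment(mu)
    print(f"mu = {mu:.2f}: p = {p:.2f}, q = {q:.2f}, payoff = {value:.4f}")
# Expected: p = 1/2, q = 1 for mu = 1/3 (payoff 1 - mu/2), and
#           p = 0, q = 1/2 for mu = 2/3 (payoff (1 + mu)/2).
```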
This commitment equilibrium is obviously better for the receiver than a commitment equilibrium in which the sender chooses the same message regardless of the state (such an equilibrium begets the pooling payoff). Likewise, it is easy to verify that there is no commitment equilibrium in which the senders separate in the opposite manner. Thus, this commitment equilibrium is optimal and, by Theorem 3.3, opacity equals commitment, and so the receiver's commitment strategy corresponds to the optimal information structure.

Explicitly, the optimal signal consists of two signal realizations, 𝑋 = {𝑐, 𝑜} (recommendations "close" and "keep open," respectively), and mapping 𝜋 ∶ 𝑀 → Δ(𝑋), where

𝜋(𝑐|𝑁) = 1, 𝜋(𝑐|𝐼) = 1/2
𝜋(𝑜|𝑁) = 0, 𝜋(𝑜|𝐼) = 1/2

for 𝜇 ≤ 1/2, and

𝜋(𝑐|𝑁) = 1/2, 𝜋(𝑐|𝐼) = 0
𝜋(𝑜|𝑁) = 1/2, 𝜋(𝑜|𝐼) = 1

for 𝜇 ≥ 1/2.

Naturally, there are many games that fall under the umbrella of communication games that nevertheless cannot be termed simple. That is, while information transmission is a key component of such games, the receiver has preferences over the messages themselves as well. Here we provide a generalization of Theorem 3.3.

There are 𝑛 states and 𝑡 messages, indexed by 𝑖 and 𝑗, respectively. The receiver has two actions, 𝑎_1 and 𝑎_2, and for convenience we write the receiver's utilities from the two actions and 𝑡 messages as 𝑣_{𝑖𝑗} ∶= 𝑢(𝑎_1, 𝜃_𝑖, 𝑚_𝑗) and 𝑤_{𝑖𝑗} ∶= 𝑢(𝑎_2, 𝜃_𝑖, 𝑚_𝑗). We introduce the following condition.

Condition 3.4.
For any state 𝜃_𝑖, 𝑣_{𝑖1} ≥ max_𝑗 {𝑣_{𝑖𝑗}} and 𝑤_{𝑖1} ≥ max_𝑗 {𝑤_{𝑖𝑗}}.

This condition has a straightforward meaning. Simply, the receiver has a "favorite" message, which, all else equal, she prefers that the sender choose. Such a condition is natural in many contexts. One such instance is in the Sir Philip Sidney game, the topic of Whitmeyer (2019) [34]. There, the relatedness parameter, 𝑘, ensures that the game is not simple. However, Condition 3.4 is satisfied: regardless of the state, the receiver would prefer that the sender remain silent.

Indeed, in many settings, a stronger condition holds: that 𝑣_{𝑖1} ≥ 𝑣_{𝑖2} ≥ ⋯ ≥ 𝑣_{𝑖𝑡} and 𝑤_{𝑖1} ≥ 𝑤_{𝑖2} ≥ ⋯ ≥ 𝑤_{𝑖𝑡}. This corresponds to a scenario in which, all else equal, the receiver has the same preferences over the message chosen by the sender. The political example included later on in this paper satisfies this strong condition (and thus also Condition 3.4). There, irrespective of whether the incumbent is good or bad, the receiver prefers that he choose the lowest amount of policy frictions.

Yet another instance of a game where this strong condition is natural is a version of the Spence (1978) [31] education model in which education enhances productivity. There, we might suppose that regardless of the state, the employer would prefer that the worker have more education. Of course, all simple signaling games satisfy the strong condition.

Proposition 3.5 (Opacity Equals Commitment II). Let Condition 3.4 hold. Then, if, absent strategic concerns, each sender type prefers message 𝑚_1, the optimal transparency solution coincides with the commitment solution.

Proof. The proof is analogous to that for Theorem 3.3 and so is left to Appendix A.1. ■

Although Proposition 3.5 allows us to extend Theorem 3.3 beyond simple signaling games, it has a number of conditions as prerequisites. These assumptions are not innocuous, and without them, the result may fail. It is easy to see that in general for non-simple games, the receiver may benefit if she has the ability to commit to an incredible threat. Indeed, consider the following modification of the Spence (1978) [31] signaling model. The receiver is a firm with a binary decision: whether to hire an applicant. The applicant is the sender, who can choose his level of education, but education is costly and, all else equal, less education is better for each type.

Suppose that the game is non-simple: education is productivity-enhancing; and, regardless of the applicant's type, a more educated worker is better for the firm. Finally, suppose that no matter the type or the level of education, the firm would prefer to hire the worker. Thus, with the ability to commit to actions, the firm might want to commit to only hire workers with a sufficiently high level of education, yet such a protocol would not be sequentially rational: no signal would persuade the firm to follow through on such a threat. Hence, commitment is strictly better than opacity.

In addition, Theorem 3.3 is robust. Namely, if we perturb the receiver's preferences so that she has slight preferences over the message chosen by the sender, the opacity equals commitment equivalence remains, provided in the unperturbed game the receiver can obtain a payoff that is strictly higher than the pooling payoff (the payoff should there be no strategic interaction, in which the receiver simply took the action optimal under the prior). To wit,
Proposition 3.6.
Consider any two-action, simple signaling game in which each action is strictly optimal in at least one state, and in which the receiver can obtain a payoff that is strictly higher than the pooling payoff. Then for any non-simple perturbation of this game in which the receiver's payoffs are sufficiently close to her payoffs in the unperturbed game, opacity is as good as commitment.

Proof.
We prove the following formal result. Consider any two-action, simple signaling game with payoffs denoted by 𝑣_𝑖 ∶= 𝑢_𝑅(𝑎_1, 𝜃_𝑖) and 𝑤_𝑖 ∶= 𝑢_𝑅(𝑎_2, 𝜃_𝑖); and let there exist some 𝜃_𝑖 such that 𝑣_𝑖 > 𝑤_𝑖 and some 𝜃_𝑘 such that 𝑤_𝑘 > 𝑣_𝑘. Moreover, let the receiver be able to obtain a payoff 𝑉 with

V > \sum_{i=1}^{n} \mu(\theta_i) v_i, \quad \text{and} \quad V > \sum_{i=1}^{n} \mu(\theta_i) w_i.

In any non-simple perturbation of this game, with 𝑢_𝑅(𝑎_1, 𝑚_𝑗, 𝜃_𝑖) = 𝑣_𝑖 + 𝜔(𝑎_1, 𝑚_𝑗, 𝜃_𝑖) and 𝑢_𝑅(𝑎_2, 𝑚_𝑗, 𝜃_𝑖) = 𝑤_𝑖 + 𝜔(𝑎_2, 𝑚_𝑗, 𝜃_𝑖), there exists a 𝜅 > 0 such that if |𝜔(𝑎_𝑙, 𝑚_𝑗, 𝜃_𝑖)| ≤ 𝜅 for all 𝑎_𝑙 ∈ 𝐴, for all 𝑚_𝑗 ∈ 𝑀 and for all 𝜃_𝑖 ∈ Θ, the solution to the receiver's commitment problem satisfies her obedience constraints. Viz, opacity equals commitment.

The proof may be found in Appendix A.2. ■

Alas, if the receiver has three or more actions then Theorem 3.3 may fail to hold. Consider the modified game depicted in Figure 4. This is a version of the Beer-Quiche game of Cho and Kreps (1987) [9] with the addition of action ℎ for the receiver. As this game illustrates, our previous result (Theorem 3.3) cannot be extended to the case in which the receiver has three actions.
Figure 4: A Three-Action Game. Payoffs (sender, receiver) are:

Type 𝜃_𝑊 (probability 1 − 𝜇): after 𝐵, 𝑓 → (0, 1), 𝑛𝑓 → (4, 0), ℎ → (4, 1/2); after 𝑄, 𝑓 → (1, 1), 𝑛𝑓 → (6, 0), ℎ → (6, 1/2).
Type 𝜃_𝑆 (probability 𝜇): after 𝐵, 𝑓 → (1, 0), 𝑛𝑓 → (6, 1), ℎ → (1, 1/2); after 𝑄, 𝑓 → (0, 0), 𝑛𝑓 → (4, 1), ℎ → (0, 1/2).

Recall the Beer-Quiche game: there are two types, wimp (𝜃_𝑊) and strong (𝜃_𝑆). In the original formulation, there are just two actions for the receiver: it is a basic match-the-state game in which the receiver obtains a payoff of 1 from fighting the wimp or not fighting the strong type, and 0 otherwise. Now, the receiver gets a payoff of 1/2 from choosing ℎ no matter the state. As for the sender, in the original setting, wimps prefer Quiche to Beer and vice-versa for the strong types. In addition, both types prefer not being fought to being fought, and crucially, they would rather have their least favorite meal and not be fought than have their favorite meal and be fought. These preferences continue to hold in the modification; and there, ℎ yields the same payoff to the wimp as 𝑛𝑓, and ℎ yields the same payoff to the strong type as 𝑓.

Throughout this example, we assume that 𝜇 > 1/2. For completeness, we leave detailed analysis to Appendix A.3. There, we derive that in the commitment solution the senders separate: type 𝜃_𝑊 chooses 𝑄 and type 𝜃_𝑆 chooses 𝐵. After 𝐵, the receiver chooses 𝑛𝑓 with probability 1; and after 𝑄, the receiver chooses 𝑓 with probability 2/5 and ℎ with probability 3/5.

However, in the transparency problem, it is immediately clear that given the corresponding direct signal realization, "play ℎ," the obedience constraint for the receiver will be satisfied only if her belief given the signal is that the sender is the wimp with probability 1/2. Instead, she believes that the sender is the wimp with probability 1, and so the optimal commitment strategy is not sequentially rational.
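A small numerical sketch of this example (using the payoff table above and an illustrative prior 𝜇 = 0.6): it checks the sender's incentive constraints under the stated commitment strategy and then checks obedience at the recommendation "play ℎ"; the mixing probabilities are the ones reported above.

```python
# Sketch: check the commitment solution of the modified Beer-Quiche game and
# show that the recommendation "play h" is not obedient. Payoffs and the
# probabilities (nf after B; f w.p. 2/5, h w.p. 3/5 after Q) are as above.
mu = 0.6  # prior Pr(theta_S); any mu > 1/2 works here

# sender payoffs u_S[type][message][action] and receiver payoffs u_R[type][action]
u_S = {"W": {"B": {"f": 0, "nf": 4, "h": 4}, "Q": {"f": 1, "nf": 6, "h": 6}},
       "S": {"B": {"f": 1, "nf": 6, "h": 1}, "Q": {"f": 0, "nf": 4, "h": 0}}}
u_R = {"W": {"f": 1, "nf": 0, "h": 0.5}, "S": {"f": 0, "nf": 1, "h": 0.5}}

# commitment strategy: distribution over actions after each message
pi = {"B": {"f": 0.0, "nf": 1.0, "h": 0.0},
      "Q": {"f": 0.4, "nf": 0.0, "h": 0.6}}
strategy = {"W": "Q", "S": "B"}      # separating strategies
prior = {"S": mu, "W": 1 - mu}

def sender_value(t, m):
    return sum(pi[m][a] * u_S[t][m][a] for a in ("f", "nf", "h"))

for t in ("W", "S"):
    on_path = sender_value(t, strategy[t])
    deviation = max(sender_value(t, m) for m in ("B", "Q") if m != strategy[t])
    print(f"type {t}: on-path {on_path:.2f} >= deviation {deviation:.2f}:",
          on_path >= deviation)

commit_value = sum(prior[t] * sum(pi[strategy[t]][a] * u_R[t][a]
                                  for a in ("f", "nf", "h")) for t in ("W", "S"))
print(f"receiver's commitment payoff at mu = {mu}: {commit_value:.3f}")

# Obedience check at "play h": h is only recommended after Q, i.e. when the
# sender is the wimp for sure, so the receiver would rather fight.
belief_W_given_h = 1.0
payoffs_given_h = {a: belief_W_given_h * u_R["W"][a]
                   + (1 - belief_W_given_h) * u_R["S"][a] for a in ("f", "nf", "h")}
print("payoffs after 'play h':", payoffs_given_h)   # f is strictly better than h
```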
4 Ex Ante Learning
We begin this section by revisiting the introductory example involving intra-firm communication. Suppose that prior to the communication game the COO has a chance to acquire public information. She may, for instance, commission an independent report, or even visit the local branch herself. Is it true that, given this opportunity, the COO would always choose to take it? That is, would any ex ante information acquisition benefit the COO? Observe that an equivalent question is whether the COO's payoff as a function of the (prior) belief in the signaling game is convex in the prior.

The companion paper to this one, Whitmeyer (2019) [33], looks at a general version of this question when the receiver is restricted to full transparency. As we discover there, for two-state, two-action simple communication games like the intra-firm example, the answer is yes. But what if the COO may choose the optimal degree of transparency in the ensuing communication game?

We denote the receiver's payoff with optimal transparency as a function of the prior by
𝑉 = 𝑉(𝜇), and denote the receiver's payoff with full transparency as 𝑉_𝑇 = 𝑉_𝑇(𝜇). It is straightforward to verify that 𝑉 and 𝑉_𝑇 are

V(\mu) = \begin{cases} 1 − \mu/2 & \mu \leq 1/2 \\ (1 + \mu)/2 & \mu \geq 1/2 \end{cases}, \quad \text{and} \quad V_T(\mu) = \begin{cases} 1 − \mu & \mu \leq 1/2 \\ \mu & \mu \geq 1/2. \end{cases}

These two functions are depicted in Figure 5, and are clearly convex. Hence, regardless of whether the COO can choose the optimal signal, ex ante learning is always (at least weakly) beneficial. (Note that until now, we have referred to the prior belief as 𝜇. Henceforth, 𝜇_0 will denote the prior belief before ex ante learning, and we denote the prior belief in the communication game by 𝜇, which is the posterior belief resulting from the initial fact finding.)

As we establish below, ex ante information acquisition is always good, provided the receiver can choose the information structure in the ensuing game. If she cannot, then Theorem 4.1 in Whitmeyer (2019) [33] states that information up front may actually hurt the receiver. That is, there is information that she would refuse, even if it was free.

Formally, we model ex ante information acquisition or fact-finding as follows. Fix a simple communication game, and suppose that prior to participating in the game, the receiver may acquire information, which is public. That is, initially the receiver and the sender share some prior, 𝜇_0, and there is some finite (or at least compact) set of signal realizations 𝑌 and a signal or Blackwell experiment, a mapping 𝜁 ∶ Θ → Δ(𝑌), whose realization is public. This experiment leads to a distribution over posteriors, where the posterior following signal realization 𝑦 is 𝜇_𝑦. Following the realization of the experiment, the sender and receiver then take part in the signaling game, where the common prior for the game is 𝜇_𝑦. Call 𝜁 the Initial Experiment.
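To make the fact-finding stage concrete, here is a small sketch that combines it with the introductory example: it takes the optimal-transparency value function 𝑉 displayed above, a hypothetical binary initial experiment 𝜁 (the precision numbers are purely illustrative), and checks that the expected value of the induced posteriors is weakly above the value at 𝜇_0, consistent with the convexity of 𝑉.

```python
# Sketch: value of a (hypothetical) binary initial experiment in the
# introductory example, using the optimal-transparency value function V above.
def V(mu):
    return 1 - mu / 2 if mu <= 0.5 else (1 + mu) / 2

mu0 = 2 / 3                                # prior Pr(theta_G) before fact finding
zeta = {"G": {"good": 0.9, "bad": 0.1},    # Pr(y | state); illustrative precision
        "B": {"good": 0.2, "bad": 0.8}}

expected_value = 0.0
for y in ("good", "bad"):
    p_y = mu0 * zeta["G"][y] + (1 - mu0) * zeta["B"][y]
    mu_y = mu0 * zeta["G"][y] / p_y        # posterior after realization y
    expected_value += p_y * V(mu_y)

print(f"V(mu0) = {V(mu0):.4f}")            # value with no fact finding
print(f"E[V(mu_y)] = {expected_value:.4f}")  # value with the initial experiment
# Convexity of V (Theorem 4.3) guarantees E[V(mu_y)] >= V(mu0).
```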
Example 4.1. Next, we revisit the example from Lemma 4.2 in Whitmeyer (2019) [33] to show how information design ensures a positive value of information to the receiver in a game where information may have a negative value with full transparency. There are four states,
Θ = {𝜃_1, 𝜃_2, 𝜃_3, 𝜃_4}, and a belief is a quadruple (𝜇_1, 𝜇_2, 𝜇_3, 𝜇_4), where 𝜇_𝑖 ∶= Pr(Θ = 𝜃_𝑖) for all 𝑖 = 1, 2, 3, 4 and 𝜇_1 + 𝜇_2 + 𝜇_3 + 𝜇_4 = 1.

The belief can be fully described with just three variables; hence, depicting the receiver's payoff as a function of the belief requires four dimensions. The current medium of this paper renders this impossible, so instead we restrict attention to a family of experiments that involve learning on just one dimension. That is, we fix the beliefs about two of the states at 1/3 and 1/8, and consider only the receiver's payoff as a function of her (prior) belief about the remaining two states. Learning is on just one dimension, and so (abusing notation) we write the receiver's belief about one of the remaining states as 𝜇 and the other as 13/24 − 𝜇, where 𝜇 ∈ [0, 13/24].

In two of the states, action 𝑎_1 is the correct action for the receiver, and in the other two states, action 𝑎_2 is correct.

[Table: the receiver's state-dependent payoffs]

Likewise, the sender's state (type)-dependent payoffs from (message, action) pairs are given as follows:

[Table: the sender's type-dependent payoffs from (message, action) pairs]

Note that two of the types have messages that are strictly dominant, and a third type has a message that is strictly dominated.

In Figure 6 we depict the receiver's equilibrium payoff as a function of 𝜇 with both full transparency (𝑉_𝑇) and optimal transparency (𝑉); both are piecewise linear in 𝜇. 𝑉_𝑇 was derived in Lemma 4.2 of Whitmeyer (2019) [33]. By Theorem 3.3, to obtain 𝑉 we need only solve the commitment problem; thus, its derivation reduces to a simple (though tedious) linear program, which is omitted.

As Figure 6 illustrates, the issue engendered by ex ante learning–that the resulting belief may beget a strictly worse equilibrium in the communication game–is ameliorated if the receiver can choose the optimal information structure. Thus, in the example, the receiver's payoff with optimal transparency is convex. There, information design ensures that ex ante information acquisition is always helpful.

This result is general, and holds for all simple games with two actions. First, we establish that commitment guarantees a positive value of information in any communication game.

Figure 6: Optimal Transparency and Full Transparency Payoffs, Non-convex Counterexample

Lemma 4.2. In the commitment problem, the receiver's payoff under the optimal commitment strategy is convex in the prior 𝜇.

Proof. The proof is left to Appendix B.1. ■

Second, Theorem 3.3 and Lemma 4.2 combine to yield
Theorem 4.3.
In simple games, if the receiver has two actions, then the receiver's payoff with optimal transparency is convex in the prior 𝜇.

This result is striking: in simple binary-action games, optimal transparency guarantees that the value of information is always positive, even though with full transparency the value of information may be negative. Unfortunately, as the next result illustrates, with three or more states and actions, ex ante information acquisition is not always beneficial for the receiver, even with optimal transparency.
Proposition 4.4.
If there are at least three states, three actions and two messages, then the receiver's payoff is not generally convex in the prior.

Proof.
Proof is via counterexample. We revisit the game from Proposition 2.4 of this paper, which itself was taken from Lemma 4.3 in Whitmeyer (2019) [33]. We endow the sender types with access to three messages (𝑔, 𝑚, and 𝑏), and remind ourselves that this game is cheap talk with transparent motives, where the sender gets 1 from 𝑙 or 𝑠 and 0 from 𝑥, no matter his type or message choice. The prior is 𝜇_0 and learning consists of a binary initial experiment that results in the posteriors 𝜇_1 and 𝜇_2, each with probability 1/2. Explicitly,

𝜇_0 ∶= (1/4, 1/4, 1/2), 𝜇_1 ∶= (1/12, 1/4, 2/3), and 𝜇_2 ∶= (5/12, 1/4, 1/3).

By Proposition 2.5, since there are three messages it is without loss to restrict attention to equilibria in which the sender types separate in the optimal transparency solution. We solve the resulting linear program at each of the three beliefs and obtain the receiver's maximal payoffs. Comparing the receiver's payoff at 𝜇_0 to her expected payoff resulting from the initial experiment 𝜁 reveals that the former strictly exceeds the latter: the receiver's payoff is not convex in the prior.

With commitment, in contrast, 𝜃_𝐿 separates and chooses 𝑚, and 𝜃_𝑀 and 𝜃_𝐻 pool on 𝑔; the receiver commits to choosing 𝑠 after 𝑚, 𝑙 after 𝑔, and 𝑥 after 𝑏. Her payoff is 399/288 at 𝜇_0, which equals her expected payoff with learning,
(1/2) × 469/288 + (1/2) × 329/288.
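For the record, the arithmetic behind this expected payoff is a one-line computation:

\[
\frac{1}{2}\cdot\frac{469}{288}+\frac{1}{2}\cdot\frac{329}{288}
=\frac{469+329}{576}
=\frac{399}{288}\approx 1.385 .
\]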
Thus, as we learned from Lemma 4.2, commitment ensures that information cannot hurt the receiver. It is clear, however, that this commitment solution is not sequentially rational: after being instructed to choose s, the receiver strictly prefers to choose x, since she knows the sender is θ_L. Hence, we have stumbled upon another illustration that when the number of actions is three or greater, opacity ≠ commitment.

Finally, Figure 7 depicts the receiver's posteriors resulting from the receiver-optimal equilibria at each belief, both when the receiver can choose the optimal degree of transparency (in black) and when the receiver has commitment power (in green). Each x denotes a posterior distribution. ■

5 Examples

Here, we illustrate the earlier results through three examples. In the first, we explore the decision of an investor choosing whether to invest in a firm with uncertain potential. In the second, we analyze a regime change problem, where a populace must choose whether to replace a possibly bad incumbent. In the third, we explore the problem of a hiring school at the American Economic Association academic job market. All three examples suggest different interpretations of the signal π. In the investment example, π could be disclosure rules and regulations, or due diligence insisted upon by the investor. Conversely, in the political example, π could be the reporting actions of the free press. In the job market example, π could be an email filter.

Figure 7: Proposition 4.4 Optimal Posteriors

5.1 Benchmarks and Disclosure Rules

The receiver is a prospective investor who is choosing whether to purchase a firm, whose CEO is the sender. The CEO has private information, the viability of the firm: with probability γ the firm's future cash flow will be high, and with probability 1 − γ the firm's future cash flow will be low. A high cash flow delivers the receiver an income stream of 1 in perpetuity, and a low cash flow delivers the receiver a perpetual stream of 0.

Hence, the sender's type is the probability γ that the project is viable. Suppose that there are just three possible types, Γ := {3/10, 3/5, 9/10}, and that both sender and receiver share a common prior that each type is realized with probability 1/3. The receiver discounts the future by δ = 9/10.

The posted price for the project is 7, and if the receiver buys the company with viability γ her expected payoff is

γ/(1 − δ) − 7.

Hence the receiver will buy only if
𝔼[γ]/(1 − δ) − 7 ≥ 0, or 𝔼[γ] ≥ 7/10.
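As a quick sanity check on this threshold, the snippet below evaluates the buyer's net payoff at the prior and at each type; the helper name buy_payoff is illustrative and not from the paper.

```python
# Quick check of the purchase threshold (a sketch).  Types are {3/10, 3/5, 9/10},
# the prior is uniform, delta = 9/10, and the price is the 7 appearing above.
delta, price = 9 / 10, 7
gammas = [3 / 10, 3 / 5, 9 / 10]

def buy_payoff(expected_gamma):
    """Expected payoff from buying: a perpetuity worth E[gamma]/(1-delta), minus the price."""
    return expected_gamma / (1 - delta) - price

print(buy_payoff(sum(gammas) / len(gammas)))   # -1.0 at the prior (E[gamma] = 3/5): do not buy
print([buy_payoff(g) for g in gammas])          # [-4.0, -1.0, 2.0]: only gamma = 9/10 clears 7/10
```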
The CEO receives a lump-sum payoff of 3/5 in the event that the company is bought, and 0 otherwise.

There is a signaling component to this interaction: prior to the purchasing decision of the receiver, the CEO must choose how much of a budget of 1 to allocate between a risky venture and a safe venture. The safe venture repays the amount invested, s, with probability 1, and the risky venture repays double the amount invested, 2r, with probability γ and repays 0 with probability 1 − γ. Thus, the short-term profitability of the risky venture is perfectly correlated with the likelihood that the firm will have a high future cash flow.

With regard to the short-run risky venture, the CEO is risk averse and his utility over his terminal wealth is u(w) = √w. Overall, given an investment of r in the risky venture and 1 − r in the safe venture, the CEO's utility from the scenario is

u_S = γ√(1 − r + 2r) + (1 − γ)√(1 − r) + 3/5

if the receiver buys the firm and

u_S = γ√(1 − r + 2r) + (1 − γ)√(1 − r)

if the receiver does not buy the firm.

Here is a synopsis of the scenario's timing: first, the state γ is drawn uniformly from Γ and revealed to the sender; second, the sender chooses a short-term investment level r, which is observed by the receiver; third, the receiver decides whether to purchase the firm, and then payoffs are realized. (Recall that we refer to the sender's action as a message; here the investment level, r, is his message.)

Absent strategic concerns, the CEO's short-run investment strategy is simple. If γ = 3/10, he chooses r = 0; if γ = 3/5, he chooses r = 5/13; and if γ = 9/10, he chooses r = 40/41. However, such separation does not beget an equilibrium in the game with full transparency: both types 3/10 and 3/5 would prefer to deviate and mimic type 9/10.

Indeed, it is easy to see that there can be no equilibrium in this game with full transparency in which there are messages chosen after which the receiver strictly prefers different actions. For that to be the case, at least one of types 3/10 or 3/5 must have some support of his mixed strategy on a message that is followed by the decision to not buy. However, that type would then deviate profitably to the message that is followed by the decision to buy. Consequently, the best payoff the receiver can obtain is the pooling payoff; and since 𝔼[γ] = 3/5 < 7/10, she does not buy and her payoff is 0.

Now consider the commitment problem in which the receiver can choose a mapping π: M → Δ{buy, not}. We obtain that the optimal equilibrium is one in which each type of sender separates: type 3/10 chooses r = 0, type 3/5 chooses r = 40/41, and type 9/10 chooses r = 1. One signal that engenders such an equilibrium is

π(buy | 40/41) = 5/3 − 17/(3√41) ≈ .78, π(buy | 1) = 5/3 − √2 + 4/√41 ≈ .88, and π(buy | r) = 0 for all r ≠ 40/41, 1,

and the receiver's payoff is (5 − 6√2 + √41)/90 ≈ .03 > 0. For a detailed derivation of the solution see Appendix C.1. From Theorem 3.3, opacity equals commitment, and hence this is the optimal information structure.

Note that there are several curious properties of the optimal signal and equilibrium. First, both types 3/5 and 9/10 are forced to over-invest in the signaling stage, in order to negate a profitable deviation by the lowest type (which is the type that the prospective buyer really does not want to purchase). Second, the information is coarsened by the signal: the signal occasionally provides both false positives and false negatives, so as to optimally incentivize the low types.

We can interpret π literally as a combination of a benchmark and a disclosure rule (or an accounting rule).
The investor can design a protocol so that if the required investment level(s) are not met, she will observe a signal realization instructing her to refrain from buying. Even if the benchmark (the required investment levels) is met, the accounting rule is such that she is still occasionally instructed not to buy.

One way this could be implemented, for instance, is via an online form that initially asks the CEO (or his team of accountants) explicitly whether the firm has either a) invested 40/41 or 1, or b) anything else (where truthfulness could be legally enforced). Then, given an answer of a), the form would ask a follow-up question some fraction of the time, asking the sender to state whether he invested 40/41 or 1; depending on the answer, the firm would be flagged as bad or rated excellent. The remaining fraction of the time, no additional question would be asked, and conditional on no additional question being asked, the firm would be assessed as excellent 85 percent of the time, and bad otherwise. If b) were answered to the initial question, then the firm would be flagged as bad. Finally, the prospective investor would invest following an excellent rating, but not following bad.

5.2 A Free Press that Censors

Publicity is justly commended as a remedy for social and industrial diseases. Sunlight is said to be the best of disinfectants.

Justice Louis Brandeis
Next, we apply the ideas promulgated earlier to a basic regime change game in the spirit of Angeletos, Hellwig and Pavan (2006) [1]. There is an incumbent in power whose type is a (real-valued) binary random variable X, which takes values in the set {x_g, x_b}, corresponding to "good" and "bad", respectively. We impose that x_g > 1 > x_b. There is a single representative receiver, a median voter. The incumbent's type is private information, and the receiver's prior belief about the incumbent's type, μ₀ := Pr(X = x_b), is correct and is common knowledge.

The incumbent chooses r ∈ [r̲, r̄] ⊂ (0, 1), which corresponds to the amount of (policy) frictions that impede regime change. That is, the higher r, the more costly it is for the receiver to attack the regime. Note that r is directly payoff relevant for the receiver: the game is not simple.

There is news in this model: there is a binary signal s ∈ {G, B}, where b := Pr(B | x_b) and g := Pr(B | x_g). We suppose that the situation is one of "perfect bad news": g = 0. That is, bad news will never arise for a good incumbent. Formally, the set of messages observed by the receiver is M := [r̲, r̄] × {G, B}.

The game proceeds as follows: first, the incumbent chooses r; second, signal s is generated and the receiver observes both s and r before choosing action a ∈ {0, 1}, corresponding to "not attack" or "attack" the regime, respectively; and third, the incumbent observes the receiver's action and chooses d ∈ {0, 1}. The incumbent's utility function is

U_I = (1 − d)(x − a) − r.

If the incumbent is bad, an attack will be successful and the incumbent will choose d = 0. Conversely, if an incumbent is good, an attack will never be successful, though it will be costly. The receiver's payoffs are straightforward: if an attack is successful, the receiver obtains payoff 1 − r, and if an attack is unsuccessful, the receiver obtains payoff −r. If the receiver does not attack, she gets 0.

We impose that x_b > r̄ − r̲ and that b satisfies

b < (x_b − (r̄ − r̲))/x_b,

i.e., that r̄ − r̲ < x_b(1 − b), which two conditions ensure that there are no separating equilibria. Indeed, suppose there is such an equilibrium, in which type x_b chooses some r and type x_g chooses some r′ ≠ r. Type x_b's incentive constraint reduces to r′ − r ≥ x_b, which is never satisfied under our imposition. We also impose that

μ₀(1 − b)(1 − r̲) + (1 − μ₀)(−r̲) < 0,

which ensures that there is a pooling equilibrium in which both types of incumbent pool on r̲. Indeed, in such an equilibrium, following news B, the receiver will attack, and following news G, the receiver will not attack.

(Angeletos, Hellwig and Pavan (2006) [1], as do many other papers that look at games of regime change, model the scenario as a global game. Of course, arguably the most compelling aspect of a global game is the coordination problem faced by the populace. However, here this issue is moot: the mapping π eliminates possible coordination problems; moreover, we are searching for the optimal equilibrium for the receiver. Note also that in the proof of Theorem 3.3 the realized message was a deterministic outcome of the sender's choice, whereas here the realized message is a random outcome of the sender's choice. Nevertheless, it is simple to verify that the result continues to hold; this more general formulation was not included in the paper since it adds cumbersome notation and is analogous to the simpler, more concise version that this paper contains instead.)
It is easy to see, moreover, that this is the equilibrium that maximizes the receiver's payoff, and in it she obtains V = μ₀ b(1 − r̲).

On the other hand, the receiver's commitment solution, and hence the optimal degree of transparency (via Theorem 3.3), is given by the direct signal π: M → Δ{0, 1}, where

π(1 | (r̄, G)) = 0, π(1 | (r̲, G)) = (r̄ − r̲)/(x_b(1 − b)), and π(1 | (r, s)) = 1 for all (r, s) ≠ (r̲, G), (r̄, G).

In the receiver-optimal equilibrium under this degree of transparency the senders separate: sender x_b chooses r̲ and sender x_g chooses r̄. The full derivation is contained in Appendix C.2.

In this setting, one natural interpretation of the signal π is as the actions or reporting of the "free press". That is, the specific π described above can be viewed in general as particular equilibrium play by a neutral mediator, which in the political setting naturally corresponds to the neutral press. Such a press observes the level of frictions chosen by the sender (r) and a news event (G or B) and then reports on those two things to the populace.

Hence, in this scenario, the populace is better off if the press occasionally obfuscates. With all due respect to Justice Brandeis, sunlight may not be the best disinfectant (at least in general). Occasionally, even though the incumbent has done something that only a bad incumbent would do (chosen r̲), the free press should give him a pass and not report it to the populace.

The American Economic Association (AEA) has a mechanism for the academic job market in which each job market candidate can select up to two schools to designate as recipients of a "signal" (henceforth a "wave", in order to keep things clear) expressing the candidate's interest in being interviewed at the Allied Social Science Associations (ASSA) meetings in January. We explore a simplified version of this problem.

In order to avoid the problem of waving to multiple receivers, a scenario beyond the scope of this paper, we suppose that there is just a single school, and one candidate with a single wave at his disposal. The candidate has a type, a random variable Θ, which takes values in the unit interval, [0, 1]. Both the school and the candidate share a common prior over the candidate's type, which has full support on [0, 1] and is distributed according to an absolutely continuous cdf F(θ) := Pr(Θ ≤ θ).

The school's decision is binary: interview the candidate (i) or not (n). The school obtains a payoff of θ if it interviews a candidate with type Θ = θ, and the school has an outside option d ∈ (0, 1). The candidate has value x(θ) of being interviewed by the school, where x: Θ → [0, 1] is increasing and twice continuously differentiable on [0, 1]. Moreover, x(0) = 0 and x(1) = 1.

The candidate has a simple choice as well: wave to the school (w) or not (r). There is a cost k ∈ (0, 1) to the candidate of waving to the school; think of this as an opportunity cost: if the candidate waves to this school, he cannot wave to another.

We search for the commitment solution. By Lemma 3.2 we may restrict the candidate types to pure strategies. Moreover, we can fully describe the commitment strategy, π, with variables p and q, where

p := π(i | w), and q := π(i | r).

Let us look for an equilibrium in which positive measures of types choose each message (remember, the sender's message is a choice of wave or not). All types θ who choose w have the following IC constraint:

p x(θ) ≥ q x(θ) + k.

Hence, we must have p > q. Rearranging this, we have

(p − q) x(θ) ≥ k.
Likewise, all types θ′ who choose r have the following IC constraint:

(p − q) x(θ′) ≤ k.

Claim 5.1.
If there is an equilibrium in which some type θ̂ chooses w and some type θ† chooses r, then all θ ≥ θ̂ must choose w and all θ ≤ θ† must choose r.

Proof. Let type θ̂ choose w, and let θ be any type strictly greater than θ̂. Then we have

(p − q) x(θ) > (p − q) x(θ̂) ≥ k,

since x(⋅) is increasing and p − q > 0. Hence, θ must choose w. The analogous procedure suffices for types that are choosing r. ■ (Remember, in this paper, sender types are male and the receiver is female.)

Hence, any such equilibrium is characterized by a cutoff type θ̂, where all θ < θ̂ choose r, all θ > θ̂ choose w, and θ̂ himself is indifferent. Note that because k > 0, there must always be some positive measure of types in equilibrium who strictly prefer and hence choose r, no matter the signal. The receiver's payoff is

V = ∫₀^θ̂ (qθ + (1 − q)d) dF(θ) + ∫_θ̂^1 (pθ + (1 − p)d) dF(θ),

and the IC constraints are (p − q)x(θ) ≤ k for all θ < θ̂, and (p − q)x(θ) ≥ k for all θ > θ̂. The receiver's payoff can be rewritten as
V = q ∫₀^θ̂ (θ − d) dF(θ) + p ∫_θ̂^1 (θ − d) dF(θ) + d.
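The rewriting is simply a regrouping of terms, using the fact that F is a probability distribution:

\[
\int_0^{\hat\theta}\big(q\theta+(1-q)d\big)\,dF(\theta)+\int_{\hat\theta}^{1}\big(p\theta+(1-p)d\big)\,dF(\theta)
= q\int_0^{\hat\theta}(\theta-d)\,dF(\theta)+p\int_{\hat\theta}^{1}(\theta-d)\,dF(\theta)+d\int_0^{1}dF(\theta),
\]

and the final integral equals one.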
Because the IC constraint of type θ̂ must bind,

p = k/x(θ̂) + q.

Likewise, since the objective is linear in p and q, the optimal signal, π, is described by either

p = 1 and q = 1 − k/x(θ̂),

or

q = 0 and p = k/x(θ̂).

Consequently, the optimal commitment solution (and hence the optimal information structure) is either i. given by a signal π, with p and q described by one of the two pairs of equations above and the corresponding θ̂ that maximizes V; or ii. given by an equilibrium in which the receiver obtains the pooling payoff (which she can always obtain trivially).

Now, let us examine the example with the following parameters and functional forms. Let F be the uniform distribution, d be 2/3, k be 1/3, and x(θ) = θ. Then the value function reduces to

V = q ∫₀^θ̂ (θ − 2/3) dθ + p ∫_θ̂^1 (θ − 2/3) dθ + 2/3.

Substituting in p = 1 and q = 1 − 1/(3θ̂), we have
V = (1 − 1/(3θ̂))(θ̂²/2 − 2θ̂/3) + (2θ̂/3 − θ̂²/2 − 1/6) + 2/3,

which reduces to

V = 13/18 − θ̂/6.
Hence θ̂ = 1/3, p = 1 and q = 0. This yields the receiver a payoff of 2/3.

On the other hand, if q = 0 and p = 1/(3θ̂), we have

V = (1/(3θ̂))(2θ̂/3 − θ̂²/2 − 1/6) + 2/3,

which reduces to

V = −θ̂/6 − 1/(18θ̂) + 8/9.
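Both candidate branches can be checked numerically under these parameter values; the sketch below, with an illustrative grid search, reproduces the payoffs derived here and in the next paragraph.

```python
# Numerical check of the two candidate branches (F uniform, d = 2/3, k = 1/3,
# x(theta) = theta).  The grid search below is illustrative only.
import numpy as np

d, k = 2 / 3, 1 / 3

def receiver_value(p, q, cutoff):
    """V = q * int_0^cutoff (t - d) dt + p * int_cutoff^1 (t - d) dt + d."""
    low = cutoff ** 2 / 2 - d * cutoff
    high = (1 / 2 - d) - low
    return q * low + p * high + d

grid = np.linspace(1 / 3, 1.0, 200001)  # need cutoff >= 1/3 so that p, q stay in [0, 1]
branch1 = np.array([receiver_value(1.0, 1 - k / t, t) for t in grid])  # p = 1, q = 1 - k/x(cutoff)
branch2 = np.array([receiver_value(k / t, 0.0, t) for t in grid])      # q = 0, p = k/x(cutoff)

print(branch1.max(), grid[branch1.argmax()])  # ~0.6667 = 2/3          at cutoff ~ 1/3
print(branch2.max(), grid[branch2.argmax()])  # ~0.6976 = (8 - sqrt(3))/9 at cutoff ~ 0.577
```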
The latter expression is maximized at θ̂ = 1/√3. Hence p = 1/√3, q = 0, and the receiver obtains a payoff of (8 − √3)/9 > 2/3.

Because of the digital format of the AEA mechanism, the optimal signal π can be implemented quite naturally: via an email filter. Hence, with this formulation of the problem and in this parameter universe, schools would do better by using a filter that only reveals the candidate's wave 1/√3 (roughly 58 percent) of the time. As a result, upon receiving a wave from the candidate, the school will grant him an interview. If no such wave arrives, then the school will not.

6 Discussion

The main feature of the optimal information structure, or degree of transparency, in simple signaling games with two actions is that the receiver-optimal equilibrium under the optimal degree of transparency enables the receiver to achieve the same payoff as in the case wherein she had the power of committing ex ante to a (mixed) strategy conditioned on the signal choice of the sender. This is both remarkable, in that for a broad class of games information design is as good as commitment, and useful, since the receiver's commitment problem is merely a linear program.

We also find that in simple games with two actions, information design guarantees that the value of information is always positive: the receiver always benefits from ex ante information, no matter the form it takes. This contrasts sharply with the findings in Whitmeyer (2019) [33], where we discover that with full transparency, information may hurt the receiver, even in binary-action simple games.
References

[1] George-Marios Angeletos, Christian Hellwig, and Alessandro Pavan. Signaling in a global game: Coordination and policy traps. Journal of Political Economy, 114(3):452–484, 2006.
[2] Vladimir Asriyan, William Fuchs, and Brett Green. Information aggregation in dynamic markets with adverse selection. Mimeo, April 2017.
[3] Ian Ball. Scoring strategic agents. arXiv e-prints 1909.01888, September 2019.
[4] David Blackwell. Comparison of experiments. In Proceedings of the Second Berkeley Symposium on Mathematical Statistics and Probability, pages 93–102, Berkeley, Calif., 1951. University of California Press.
[5] Andreas Blume, Oliver J. Board, and Kohei Kawamura. Noisy talk. Theoretical Economics, 2(4):395–440, 2007.
[6] Andreas Blume, Ernest K. Lai, and Wooyoung Lim. Eliciting private information with noise: The case of randomized response. Games and Economic Behavior, 113:356–380, 2019.
[7] Andreas Blume, Ernest K. Lai, and Wooyoung Lim. Mediated talk: An experiment. Mimeo, 2019.
[8] Raphael Boleslavsky and Kyungmin Kim. Bayesian persuasion and moral hazard. Mimeo, February 2017.
[9] In-Koo Cho and David M. Kreps. Signaling games and stable equilibria. The Quarterly Journal of Economics, 102(2):179–221, 1987.
[10] Vincent P. Crawford and Joel Sobel. Strategic information transmission. Econometrica, 50(6):1431–1451, 1982.
[11] Laura Doval and Vasiliki Skreta. Constrained information design: Toolkit. ArXiv e-prints, November 2018.
[12] Francoise Forges. An approach to communication equilibria. Econometrica, 54(6):1375–1385, 1986.
[13] Francoise Forges. Equilibria with communication in a job market example. The Quarterly Journal of Economics, 105(2):375–398, 1990.
[14] Esther Gal-Or. Warranties as a signal of quality. The Canadian Journal of Economics, 22(1):50–61, 1989.
[15] Chirantan Ganguly and Indrajit Ray. Simple mediation in a cheap-talk game. Mimeo, 2011.
[16] George Georgiadis and Balázs Szentes. Optimal monitoring design. Mimeo, 2018.
[17] Maria Goltsman, Johannes Hörner, Gregory Pavlov, and Francesco Squintani. Mediation, arbitration and negotiation. Journal of Economic Theory, 144(4):1397–1420, 2009.
[18] Sanford J. Grossman. The informational role of warranties and private disclosure about product quality. The Journal of Law and Economics, 24(3):461–483, 1981.
[19] Maxim Ivanov. Communication via a strategic mediator. Journal of Economic Theory, 145(2):869–884, 2010.
[20] Emir Kamenica and Matthew Gentzkow. Bayesian persuasion. The American Economic Review, 101(6):2590–2615, 2011.
[21] Michael Lachmann and Carl T. Bergstrom. Signalling among relatives: II. Beyond the tower of Babel. Theoretical Population Biology, 54(2):146–160, 1998.
[22] Maël Le Treust and Tristan Tomala. Persuasion with limited communication capacity. ArXiv e-prints, November 2017.
[23] Hayne E. Leland and David H. Pyle. Informational asymmetries, financial structure, and financial intermediation. The Journal of Finance, 32(2):371–387, 1977.
[24] Paul Milgrom and John Roberts. Price and advertising signals of product quality. Journal of Political Economy, 94(4):796–821, 1986.
[25] Roger B. Myerson. Multistage games with communication. Econometrica, 54(2):323–358, 1986.
[26] Roger B. Myerson. Game Theory: Analysis of Conflict. Harvard University Press, 1991.
[27] Phillip Nelson. Advertising as information. Journal of Political Economy, 82(4):729–754, 1974.
[28] Armin Rick. The benefits of miscommunication. Mimeo, November 2013.
[29] R. Tyrrell Rockafellar. Convex Analysis. Princeton University Press, 1970.
[30] Andrés Salamanca. The value of mediated communication. Mimeo, 2016.
[31] Michael Spence. Job market signaling. In Peter Diamond and Michael Rothschild, editors, Uncertainty in Economics, pages 281–306. Academic Press, 1978.
[32] Stanley L. Warner. Randomized response: A survey technique for eliminating evasive answer bias. Journal of the American Statistical Association, 60(309):63–69, 1965.
[33] Mark Whitmeyer. In simple communication games, when does ex ante fact-finding benefit the receiver? arXiv e-print, September 2019.
[34] Mark Whitmeyer. Strategic inattention in the Sir Philip Sidney game. bioRxiv, 2019.
[35] Weijie Zhong. Information design possibility set. ArXiv e-prints, April 2018.
A Section 3 Proofs
A.1 Proposition 3.5 Proof
There are n states, t messages, and two actions for the receiver. From Lemma 3.2, in searching for the optimal commitment solution it is without loss to restrict our search to those in which each type of sender chooses a pure strategy. Hence, we may write the receiver's objective function as

V = Σ_{i=1}^{n} μ₀(θᵢ)(πᵢ v_{i j(i)} + (1 − πᵢ) w_{i j(i)}),

where j(i) is the message chosen by sender i, and πᵢ is the probability that the receiver plays a₁ after message j(i). The obedience constraints are

Σᵢ μ₀(θᵢ) πᵢ v_{i j(i)} ≥ Σᵢ μ₀(θᵢ) πᵢ w_{i j(i)},
Σᵢ μ₀(θᵢ)(1 − πᵢ) w_{i j(i)} ≥ Σᵢ μ₀(θᵢ)(1 − πᵢ) v_{i j(i)},

provided both actions are recommended with positive probability.

Since, absent strategic concerns, each sender type prefers m₁, if the receiver chooses no transparency then each sender type will choose m₁. Moreover, without loss of generality, suppose that in such an equilibrium the receiver's optimal action is a₁; hence, Σ_{i=1}^{n} μ₀(θᵢ) v_{i1} ≥ Σ_{i=1}^{n} μ₀(θᵢ) w_{i1}.

There are three cases to consider: 1a, 1b, and 2.

Case 1:
Each action is optimal for some state, message combination; and
Case 1a:
In the commitment solution, each action is recommended with positive probability. Then, the receiver must be able to do (at least weakly) better than the no transparency equilibrium in her commitment solution:

Σ_{i=1}^{n} μ₀(θᵢ)(πᵢ v_{i j(i)} + (1 − πᵢ) w_{i j(i)}) ≥ Σ_{i=1}^{n} μ₀(θᵢ) w_{i1};

or,

Σᵢ μ₀(θᵢ) πᵢ v_{i j(i)} ≥ Σᵢ μ₀(θᵢ) πᵢ w_{i j(i)} + Σ_{i=1}^{n} μ₀(θᵢ)(w_{i1} − w_{i j(i)}) ≥ Σᵢ μ₀(θᵢ) πᵢ w_{i j(i)},

so the first obedience constraint is always satisfied under the commitment solution. Likewise,

Σ_{i=1}^{n} μ₀(θᵢ)(πᵢ v_{i j(i)} + (1 − πᵢ) w_{i j(i)}) ≥ Σ_{i=1}^{n} μ₀(θᵢ) v_{i1};

or,

Σᵢ μ₀(θᵢ)(1 − πᵢ) w_{i j(i)} ≥ Σ_{i=1}^{n} μ₀(θᵢ)(v_{i1} − πᵢ v_{i j(i)}) ≥ Σᵢ μ₀(θᵢ)(1 − πᵢ) v_{i j(i)},

where the last line follows from Condition 3.4. Thus, the second obedience constraint is also always satisfied under the commitment solution.

Case 1b:
Some action is not recommended with positive probability in the commitment solution. Following the recommendation of the on-path action, the proof is identical to that of Case 1a. Following the recommendation of the off-path action, the receiver's belief is undefined. However, since there is some message/state combination in which the off-path action is optimal, there is always a belief that we can stipulate such that the directive to play the off-path action would be obeyed.
Case 2:
For all state and message combinations only one action is optimal, say action a₁. Then, since there is an equilibrium under no transparency in which the senders pool on the receiver's favorite message, m₁, this must be the optimal commitment solution as well (the best message and the best action). Moreover, this must satisfy obedience as well, since the directive to choose a₂ will never be sent.

The result is proved: the commitment solution coincides with the optimal transparency solution.

A.2 Proposition 3.6 Proof
Proof.
Consider any two-action, simple, signaling game with payoffs denoted by vᵢ := u_R(a₁, θᵢ) and wᵢ := u_R(a₂, θᵢ); and let there exist some θᵢ such that vᵢ > wᵢ and some θ_k such that w_k > v_k. Moreover, let

V(μ₀) > Σ_{i=1}^{n} μ₀(θᵢ) vᵢ (A1)

and

V(μ₀) > Σ_{i=1}^{n} μ₀(θᵢ) wᵢ. (A2)

By construction, from Inequalities A1 and A2 the obedience constraints must be slack (simply rearrange them as in the proof of Theorem 3.3), i.e.,

Σ_{i=1}^{n} μ₀(θᵢ) vᵢ πᵢ > Σ_{i=1}^{n} μ₀(θᵢ) wᵢ πᵢ, (A3)
Σ_{i=1}^{n} μ₀(θᵢ) wᵢ (1 − πᵢ) > Σ_{i=1}^{n} μ₀(θᵢ) vᵢ (1 − πᵢ); (A4)

or,

Σ_{i=1}^{n} μ₀(θᵢ) vᵢ πᵢ = Σ_{i=1}^{n} μ₀(θᵢ) wᵢ πᵢ + δ₁, (A5)
Σ_{i=1}^{n} μ₀(θᵢ) wᵢ (1 − πᵢ) = Σ_{i=1}^{n} μ₀(θᵢ) vᵢ (1 − πᵢ) + δ₂, (A6)

for constants δ₁ > 0 and δ₂ > 0.

Then, consider any non-simple perturbation of the game with u_R(a₁, m_j, θᵢ) = vᵢ + ω(a₁, m_j, θᵢ) and u_R(a₂, m_j, θᵢ) = wᵢ + ω(a₂, m_j, θᵢ). As in the proof of Proposition 3.5, denote by j(i) the message chosen by sender i. Hence, we may rewrite ω(a₁, m_j, θᵢ) as ε_{j(i)} and ω(a₂, m_j, θᵢ) as η_{j(i)}. Accordingly,

u_R(a₁, m_j, θᵢ) = u_R(a₁, θᵢ) + ε_{j(i)} = vᵢ + ε_{j(i)} and u_R(a₂, m_j, θᵢ) = u_R(a₂, θᵢ) + η_{j(i)} = wᵢ + η_{j(i)}.

Hence, in the perturbed game, the obedience constraints become

Σ_{i=1}^{n} μ₀(θᵢ)(vᵢ + ε_{j(i)}) πᵢ ≥ Σ_{i=1}^{n} μ₀(θᵢ)(wᵢ + η_{j(i)}) πᵢ, (A7)
Σ_{i=1}^{n} μ₀(θᵢ)(wᵢ + η_{j(i)})(1 − πᵢ) ≥ Σ_{i=1}^{n} μ₀(θᵢ)(vᵢ + ε_{j(i)})(1 − πᵢ); (A8)

or,

Σ_{i=1}^{n} μ₀(θᵢ) vᵢ πᵢ ≥ Σ_{i=1}^{n} μ₀(θᵢ) wᵢ πᵢ + γ, (A9)
Σ_{i=1}^{n} μ₀(θᵢ) wᵢ (1 − πᵢ) ≥ Σ_{i=1}^{n} μ₀(θᵢ) vᵢ (1 − πᵢ) + σ, (A10)

where

γ = Σ_{i=1}^{n} μ₀(θᵢ) πᵢ (η_{j(i)} − ε_{j(i)}), and σ = Σ_{i=1}^{n} μ₀(θᵢ)(1 − πᵢ)(ε_{j(i)} − η_{j(i)}).

Then, since δ₁ and δ₂ are positive constants, there must be some τ > 0 such that if |ω(a_l, m_j, θᵢ)| ≤ τ for all a_l ∈ A, for all m_j ∈ M, and for all θᵢ ∈ Θ, then γ ≤ δ₁ and σ ≤ δ₂; and hence the obedience constraints are satisfied.

Of course, it remains to verify optimality in the perturbed game. However, recall that in the commitment problem, for each vector of pure strategies of the sender types, the receiver solves a linear program. The constraint set is a convex polytope and hence the optimum must lie at a vertex of this object. There are finitely many vertices, and finitely many messages, hence finitely many possible vectors of pure strategies. Thus, there are only finitely many combinations of these things that could constitute an optimum. Without loss of generality, suppose that in the simple game the optimum is unique. That is, the receiver's payoff, v_{r*}, at the optimum is strictly better than her payoff from any other vertex, message-vector combination. Equivalently,

v_{r*} := V(μ₀) = Σ_{i=1}^{n} μ₀(θᵢ)[πᵢ vᵢ + (1 − πᵢ) wᵢ] > v_r (A11)

for all r ∈ R, where R is the (finite) set of all vertex, pure-strategy message vector combinations except for the optimal combination (r*), and v_r is the receiver's payoff for vertex, pure-strategy message vector combination r. Inequality A11 can be rewritten as

v_{r*} = v_r + ι_r, (A12)

where for each r, ι_r > 0 is a constant. Now, consider the perturbed game, where for r* we have

v̂_{r*} = Σ_{i=1}^{n} μ₀(θᵢ)[πᵢ(vᵢ + ε_{j(i)}) + (1 − πᵢ)(wᵢ + η_{j(i)})] = Σ_{i=1}^{n} μ₀(θᵢ)[πᵢ vᵢ + (1 − πᵢ) wᵢ] + β = v_{r*} + β,

where v̂_{r*} is the receiver's payoff in the perturbed game for the (previously optimal) r* and

β = Σ_{i=1}^{n} μ₀(θᵢ)[πᵢ ε_{j(i)} + (1 − πᵢ) η_{j(i)}].

Similarly, for any other combination r, we have in the perturbed game v̂_r = v_r + α_r.
Hence, if v_{r*} + β ≥ v_r + α_r for all r, then r* is optimal. Then, since ι_r is a positive constant for every r, there must be some ρ > 0 such that if |ω(a_l, m_j, θᵢ)| ≤ ρ for all a_l ∈ A, for all m_j ∈ M, and for all θᵢ ∈ Θ, then α_r − β ≤ ι_r for all r; and hence r* is optimal. Finally, define κ := min{τ, ρ}, and the result is shown.

We have made the implicit assumption that each recommendation will be sent with positive probability on path in the simple game. However, if that is not the case, then upon the instruction to choose the off-path action, we may assign the receiver's belief so that the action is sequentially rational, since if the perturbations are sufficiently small then there are beliefs such that actions a₁ and a₂ are sequentially rational.

The result is proved: the commitment solution coincides with the optimal transparency solution provided the game is almost simple. ■

A.3 Opacity ≠ Commitment for Three or More Actions, in Detail
Here, we proceed through the modified Beer-Quiche example of Section 3.2 in detail. In the commitment problem, the receiver may commit to a behavioral strategy π: M → Δ(A). We may write π in the form of four probabilities, p, r, q, and s, with

p := π(f | B), r := π(nf | B), q := π(f | Q), and s := π(nf | Q).

From Lemma 3.2 it is without loss of generality to restrict our search for the receiver-optimal commitment equilibria to ones in which each sender type chooses a pure strategy. Moreover, it is clear that equilibria in which the senders pool will beget a payoff to the receiver of μ₀. Let us consider the separating equilibrium in which the strong guy chooses beer and the wimp, quiche: σ_S = 1 and σ_W = 0. Hence, the receiver's problem can be rewritten as

max_{p,r,q,s} (1/2)[1 + μ₀(r − p) + (1 − μ₀)(q − s)]

such that
5r + 1 ≥ 4s, (S1)
p + 2 ≥ 5q. (S2)

We also have the (possibly slack) constraints that p + r ≤ 1 and q + s ≤ 1. This is straightforward to solve, and for μ₀ > 1/2: p = 0, r = 1, q = 2/5, and s = 0. The corresponding payoff is

V = (1/2)(1 + μ₀ + (1 − μ₀)(2/5)) = (1/2)(7/5 + (3/5)μ₀).

It is easy to verify that the other separating equilibrium is not as fruitful for the receiver. Of course, the question remains as to whether the receiver could achieve the same payoff under some equilibrium in which the senders mix. We need this because we are providing a counterexample to opacity = commitment for three actions; otherwise we would leave open the possibility that there is a commitment solution in which some type(s) mix that can be achieved through the optimal degree of transparency. However, it is simple to verify that there is no optimal equilibrium under commitment in which at least one sender type chooses a non-degenerate mixed strategy. That is, the equilibrium above, in which the strong type chooses beer, the wimp chooses quiche, and the receiver commits to p = s = 0, r = 1, and q = 2/5, is uniquely optimal.

B Section 4 Proofs
B.1 Lemma 4.2 Proof
Proof.
Recall that from Lemma 3.2 it is without loss of generality in the receiver's commitment problem to restrict the sender to pure strategies. Hence, the receiver solves

max_{π,s} Σ_{i=1}^{n} μ₀(θᵢ) Σ_{l=1}^{k} π(a_l | mᵢ) u_R(mᵢ, a_l, θᵢ)

such that

Σ_{l=1}^{k} π(a_l | mᵢ) u_S(mᵢ, a_l, θᵢ) ≥ Σ_{l=1}^{k} π(a_l | m′ᵢ) u_S(m′ᵢ, a_l, θᵢ)

for all θᵢ, m′ᵢ, where s is a vector of pure strategies chosen by the sender types and message mᵢ is the message chosen by type θᵢ (of course, it is possible that mᵢ = m_k for k ≠ i if types θᵢ and θ_k choose the same message). For a fixed vector of pure strategies this is a linear program.

Naturally, the payoffs of the game may be such that for some vector of pure strategies, there exists no signal that is incentive compatible. Note also that because this is the commitment problem, incentive compatibility is independent of the prior μ₀. Next, define the set F as the set of all pairs of signals and strategy vectors that are incentive compatible, with element f := (π, s). For any f ∈ F, we have

V_f(μ₀) = Σ_{i=1}^{n} μ₀(θᵢ) Σ_{l=1}^{k} π(a_l | mᵢ) u_R(mᵢ, a_l, θᵢ),

and observe that

V_f(λμ′₀ + (1 − λ)μ″₀) = Σ_{i=1}^{n} (λμ′₀(θᵢ) + (1 − λ)μ″₀(θᵢ)) Σ_{l=1}^{k} π(a_l | mᵢ) u_R(mᵢ, a_l, θᵢ)
= λ Σ_{i=1}^{n} μ′₀(θᵢ) Σ_{l=1}^{k} π(a_l | mᵢ) u_R(mᵢ, a_l, θᵢ) + (1 − λ) Σ_{i=1}^{n} μ″₀(θᵢ) Σ_{l=1}^{k} π(a_l | mᵢ) u_R(mᵢ, a_l, θᵢ)
= λV_f(μ′₀) + (1 − λ)V_f(μ″₀)

for any λ ∈ [0, 1]. Therefore, V_f is linear, and hence convex, in μ₀.

Furthermore, it is easy to see that V*(μ₀) := max_{π,s} V(μ₀) = max_f V_f(μ₀). Since V_f is convex, epi V_f is also convex. Then, epi V* = epi max_f V_f = ∩_{f∈F} epi V_f. Since the intersection of convex sets is also convex, epi V* is also convex; hence V*(μ₀) is convex in μ₀. Note also that V*(μ₀) is continuous in μ₀, and since it is a "proper convex function" (Rockafellar (1970) [29]) it must be differentiable almost everywhere (Theorem 25.5, Rockafellar [29]).

Finally, by Jensen's inequality, the receiver always (at least weakly) prefers to obtain information ex ante. ■

C Section 5 (Examples) Derivations
C.1 Investor Example (Section 5.1) Derivation
We first look for the optimal commitment strategy should the senders separate fully. We make the ansatz that type 9/10's IC constraint does not bind, in which case we want to have him choose r as high as possible so that the other types have the least incentive to imitate him: so 9/10 chooses r = 1. Likewise, it is also clear that we want to have type 3/10 choose his myopic optimum so that he has the least incentive to deviate: so 3/10 chooses r = 0. Finally, we have 3/5 choose r ∈ (0, 1).

We summarize π in terms of three variables, k, l, and m, where

k := π(buy | 0), l := π(buy | r), and m := π(buy | 1),

and the receiver maximizes

V = (1/3)k(3/10 − 7/10) + (1/3)l(3/5 − 7/10) + (1/3)m(9/10 − 7/10)

such that
1 + (3/5)k ≥ (3/10)√(1 + r) + (7/10)√(1 − r) + (3/5)l and (3/5)√(1 + r) + (2/5)√(1 − r) + (3/5)l ≥ (3/5)√2 + (3/5)m,

where we assume (for now) that the other constraints are slack. (The first constraint keeps type 3/10 from imitating r; the second keeps type 3/5 from imitating r = 1.) It is straightforward to solve this and obtain

l = 5/3 − 17/(3√41) ≈ .78, m = 5/3 − √2 + 4/√41 ≈ .88, k = 0, and r = 40/41 ≈ .98.
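As a numerical sanity check on this solution, the sketch below (assuming, as read off the utility expressions in Section 5.1, a lump-sum bonus of 3/5 to the CEO when the firm is bought) verifies which constraints bind and recovers the receiver's payoff of roughly .03; the helper names are illustrative, not from the paper.

```python
# Sketch: verify the reported separating solution numerically.
from math import sqrt

l = 5 / 3 - 17 / (3 * sqrt(41))       # Pr(buy | r = 40/41)
m = 5 / 3 - sqrt(2) + 4 / sqrt(41)    # Pr(buy | r = 1)
k, r = 0.0, 40 / 41

def ceo_utility(gamma, invest, prob_buy, bonus=3 / 5):
    """Expected CEO utility: sqrt-utility gamble over wealth plus the bonus if bought."""
    return gamma * sqrt(1 + invest) + (1 - gamma) * sqrt(1 - invest) + prob_buy * bonus

print(ceo_utility(3 / 5, r, l) - ceo_utility(3 / 5, 1, m))    # ~0: type 3/5's constraint binds
print(ceo_utility(3 / 10, 0, k) - ceo_utility(3 / 10, r, l))  # ~0: type 3/10's constraint binds
print(ceo_utility(9 / 10, 1, m) - ceo_utility(9 / 10, r, l))  # > 0: type 9/10's constraint is slack

# Receiver's payoff: each type has weight 1/3 and nets gamma - 7/10 when she buys.
payoff = (k * (3 / 10 - 7 / 10) + l * (3 / 5 - 7 / 10) + m * (9 / 10 - 7 / 10)) / 3
print(payoff, (5 - 6 * sqrt(2) + sqrt(41)) / 90)              # both ~0.032
```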
Likewise, it is simple to verify that the other IC constraints hold. Moreover, we can stipulate that following any off-path message the receiver is never recommended to buy the firm, and so the sender never wishes to deviate to an off-path message. Thus, this constitutes an equilibrium.

The payoff from this separating equilibrium is strictly higher than the pooling payoff, so it remains to check that there are no better equilibria in which two types of sender pool. It is easy to verify that none exist that yield the receiver a payoff as high as she can get from full separation (although there is an equilibrium in which two of the types pool that is almost as good). Hence, we conclude that the π described above, in conjunction with a fully separating equilibrium, is optimal.

C.2 Regime Change Example (Section 5.2) Derivation

This scenario satisfies the conditions for Proposition 3.5 to hold. Thus, opacity = commitment; hence, let us solve the commitment problem. Suppose the incumbent types separate: x_b chooses r and x_g chooses r′ > r. The receiver commits to the following: she attacks (a = 1) with probability p following (r, B), with probability q following (r, G), with probability x following (r′, B), and with probability y following (r′, G).

The receiver's value function is

V = μ₀{b p(1 − r) + (1 − b)q(1 − r)} + (1 − μ₀)y(−r′),

and the incentive compatibility constraints are

b[p(−r) + (1 − p)(x_b − r)] + (1 − b)[q(−r) + (1 − q)(x_b − r)] ≥ b[x(−r′) + (1 − x)(x_b − r′)] + (1 − b)[y(−r′) + (1 − y)(x_b − r′)]

and

y(x_g − 1 − r′) + (1 − y)(x_g − r′) ≥ q(x_g − 1 − r) + (1 − q)(x_g − r).

Suppose that the second constraint is slack. The first constraint may be rewritten as

x_b(1 − b)(y − q) + x_b b(x − p) + r′ − r ≥ 0.

It is clear that we can set x = 1, and that this constraint must bind. Thus,

q = y + (b/(1 − b))(1 − p) + (r′ − r)/(x_b(1 − b)).

We substitute in for q into the value function and obtain

V = μ₀{(1 − b)y(1 − r) + b(1 − r) + [(r′ − r)/x_b](1 − r)} + (1 − μ₀)y(−r′).

The derivative of V with respect to y is

μ₀(1 − b)(1 − r) + (1 − μ₀)(−r′) < μ₀(1 − b)(1 − r̲) + (1 − μ₀)(−r̲) < 0,

where the second inequality follows from the condition imposed in Section 5.2. Likewise, the derivative with respect to r is

−μ₀((1 − b)y + b + (1 − r + r′ − r)/x_b) < 0,

and the derivative with respect to r′ is μ₀(1 − r)/x_b − (1 − μ₀)y, which is positive for y sufficiently small. Thus, it is clear that V is maximized at

p = 1, y = 0, r = r̲, r′ = r̄, and q = (r̄ − r̲)/(x_b(1 − b)).

Moreover, recall x = 1. The incentive compatibility constraint for x_g reduces to

(r̄ − r̲)(1 − x_b(1 − b)) ≥ 0,

which obviously holds. Deviations to off-path r can be neutralized by imposing that the receiver is recommended to overthrow should any r other than r̲ or r̄ be chosen. It is easy to verify that this equilibrium results in a strictly higher payoff than any pooling equilibrium, or any separating equilibrium in which r′ < r.
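To see the mechanics at work, one can plug hypothetical parameter values satisfying the conditions of Section 5.2 into the solution above; the numbers in the sketch below are purely illustrative and do not come from the paper.

```python
# Sketch: plug purely hypothetical parameter values (satisfying the conditions of
# Section 5.2) into the commitment solution above and verify the incentive
# constraints and the comparison with the best pooling payoff.
r_lo, r_hi = 0.2, 0.3            # stand-ins for r_underbar and r_bar
x_b, x_g = 0.5, 1.5
b, mu0 = 0.6, 0.3

assert x_b > r_hi - r_lo and r_hi - r_lo < x_b * (1 - b)      # no separating equilibria
assert mu0 * (1 - b) * (1 - r_lo) - (1 - mu0) * r_lo < 0      # pooling-on-r_lo condition

p, x, y = 1.0, 1.0, 0.0
q = (r_hi - r_lo) / (x_b * (1 - b))                           # interior attack probability

def bad_payoff(r, attack_B, attack_G):
    # Bad incumbent: signal B with prob. b; gets -r if attacked, x_b - r otherwise.
    return b * (attack_B * (-r) + (1 - attack_B) * (x_b - r)) \
        + (1 - b) * (attack_G * (-r) + (1 - attack_G) * (x_b - r))

def good_payoff(r, attack_G):
    # Good incumbent never generates B.
    return attack_G * (x_g - 1 - r) + (1 - attack_G) * (x_g - r)

print(bad_payoff(r_lo, p, q) - bad_payoff(r_hi, x, y))        # ~0: x_b's constraint binds
print(good_payoff(r_hi, y) - good_payoff(r_lo, q))            # > 0: x_g's constraint is slack

V_commit = mu0 * (b * p * (1 - r_lo) + (1 - b) * q * (1 - r_lo)) + (1 - mu0) * y * (-r_hi)
V_pool = mu0 * b * (1 - r_lo)                                 # best pooling payoff
print(V_commit, V_pool)                                       # 0.192 > 0.144
```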