Cross-verification and Persuasive Cheap Talk
Alp Atakan†, Mehmet Ekmekci‡, and Ludovic Renou§

March 1, 2021
Abstract
We study a cheap-talk game where two experts first choose what information to acquire and then offer advice to a decision-maker whose actions affect the welfare of all. The experts cannot commit to reporting strategies. Yet, we show that the decision-maker's ability to cross-verify the experts' advice acts as a commitment device for the experts. We prove the existence of an equilibrium where an expert's equilibrium payoff is equal to what he would obtain if he could commit to truthfully revealing his information.
Keywords: Bayesian persuasion, information design, commitment, cheap talk, multiple experts
JEL Classification Numbers: C73, D82

∗ Ludovic Renou gratefully acknowledges the support of the Agence Nationale pour la Recherche under grant ANR CIGNE (ANR-15-CE38-0007-01) and through the ORA Project “Ambiguity in Dynamic Environments” (ANR-18-ORAR-0005).
† Koc University and QMUL. Atakan's work on this project was supported by a grant from the European Research Council (ERC 681460, InformativePrices).
‡ Boston College
§ QMUL and CEPR

Introduction
Decision-makers routinely solicit advice from experts, whose interests are not always perfectly aligned with the decision-makers' own. A decision-maker may thus opt to consult multiple experts to detect self-serving advice. Consulting numerous experts allows the decision-maker to check the veracity of the advice he receives, since he can compare one expert's recommendation with another's, i.e., the decision-maker engages in cross-verification. This paper studies how cross-verification affects communication.

Cross-verification's effectiveness depends on the experts' information. If experts have perfectly correlated information, then inconsistent recommendations from experts definitively indicate untruthful, self-serving advice. Alternatively, if experts have uncorrelated information, then cross-verification cannot detect misleading advice. Thus, if the experts strategically acquire information, their choices will affect the scope for cross-verification. This paper sheds light on this interplay by studying a cheap-talk model, where experts independently acquire information before providing advice to a decision-maker.

In the game that we study, two experts, with identical preferences, first choose statistical experiments that provide information about an unknown state of the world, privately observe their experiments' outcomes, and then offer private reports to the decision-maker. The decision-maker collects all the reports and chooses an action.

As a benchmark, suppose that the experts could commit to revealing their experiments' outcomes truthfully. Following Kamenica and Gentzkow (2011), we call the experiment that would be chosen the expert-optimal experiment. In our model, however, the experts cannot commit. Yet, we show that there exists an equilibrium where both experts choose the expert-optimal experiment and truthfully report the outcomes of their experiments. In equilibrium, the experts optimally select experiments which enable cross-verification to be most effective. In turn, cross-verification facilitates truthful communication and, thus, allows the experts to receive their best possible payoff. In other words, cross-verification acts as a commitment device.

The existence of such an equilibrium relies on three essential properties. First, we assume that the experts are free to choose any statistical experiment. In particular, they can choose to correlate their experiments' outcomes perfectly and thus allow the decision-maker to cross-verify their reports perfectly (see Green and Stokey (1978) and Gentzkow and Kamenica (2016, 2017)). Second, suppose one expert deviates from reporting the experiment's outcome truthfully while the other is truthful. In this case, the decision-maker detects a deviation, as the two reports are inconsistent. However, the decision-maker cannot deduce the deviator's identity. Third, we show that a uniform punishment relative to the expert-optimal experiment always exists: an action of the decision-maker that deters deviations from truth-telling regardless of the realized signal.
Related literature.
This paper is related to the literature on cheap talk pioneered by Crawford and Sobel (1982), and several papers in this literature study communication with multiple experts. In particular, Krishna and Morgan (2001a) focus on a model where the experts are perfectly informed and show that there is an equilibrium where the experts truthfully reveal the state if the experts send messages simultaneously. In contrast, Krishna and Morgan (2001b) prove that such an equilibrium does not exist if the experts send messages sequentially. Battaglini (2002) shows that the decision-maker can learn a multidimensional state by consulting experts about different dimensions. Our work differs from these articles in several respects: first, we assume that the experts choose what kind of information to acquire, while the previous papers assume that the experts perfectly know the state. This is an important distinction since the experts' information affects the scope for cross-verification. Second, these papers, except for Battaglini (2002), assume that all agents have quadratic utility. In contrast, we put no restrictions on the utility functions. Third, our main result shows that the experts obtain their commitment payoff, while the cheap-talk literature is predominantly interested in full information revelation.

The survey by Sobel (2013) also discusses how cross-verification ensures truth-telling in the context of multi-sender cheap-talk games. The argument provided in this survey relies on the existence of an arbitrarily-harsh exogenously-given punishment for deviations from truthful reporting. Instead, we show that a uniform punishment relative to the optimal experiment always exists.

Our paper is also related to the following works that focus on single-expert cheap-talk games: Lyu (2020) characterizes the equilibrium set in a model where the expert acquires information before providing advice. Lipnowski (2020) shows that an expert can obtain his commitment payoff if the expert's value function is continuous. Instead, we focus on a model with multiple experts and show that the experts receive their commitment payoff, without making any assumptions on their payoff functions.

Finally, this paper is closely related to the literature on Bayesian persuasion (Kamenica and Gentzkow (2011)). A number of articles, including Au and Kawai (2020), Gentzkow and Kamenica (2016, 2017), Koessler et al. (2018), and Li and Norman (2018, 2020), study persuasion with multiple experts. In all of these papers, the experts can commit to revealing their information truthfully. In contrast, we assume that the experts' recommendations are cheap talk, i.e., we require sequential rationality at every stage of the game. Our result shows that the experts can achieve their commitment payoff even though they cannot commit to revealing their information. For a recent survey of the literature on Bayesian persuasion, we refer to Kamenica (2019).

Footnote: With quadratic utility and like-biased experts, the expert-optimal information structure coincides with the decision-maker's and entails choosing the perfectly informative experiment. Therefore, our result also implies that full information revelation is an equilibrium in this particular case, i.e., we recover the result of Krishna and Morgan (2001a). However, with other utility specifications, the expert-optimal information structure need not coincide with the decision-maker's. See Wolinsky (2002) and Gilligan and Krehbiel (1989) for related work.
Footnote: The value function describes the expert's highest expected payoff at a given belief, conditional on the decision-maker choosing a best reply to that belief. Continuity of the value function is a very strong assumption. For example, with two states and two actions, it requires the expert to be indifferent between the two actions whenever the decision-maker is.

The Model
We study a cheap-talk game between two experts, labelled 1 and 2, and a decision-maker. The experts provide the decision-maker with information about a payoff-relevant state ω ∈ Ω; the decision-maker then chooses an action a ∈ A. The sets A and Ω are finite. The experts have identical preferences. An expert's payoff is u(a, ω) when the decision-maker chooses action a and the state is ω. (We discuss the assumption of common preferences in the next section.) The decision-maker's payoff is v(a, ω). Initially, neither the experts nor the decision-maker knows the state. The common prior probability that the state is ω is π°(ω).

We first provide an informal description of the cheap-talk game. The game has three stages. In the first stage, the two experts simultaneously choose a statistical experiment. The selected experiments are publicly observed. In the second stage, each expert privately observes his experiment's outcome and then sends a message to the decision-maker. In the third stage, the decision-maker observes the experts' messages and chooses an action.

We now provide a formal description. To model the choice of statistical experiments, we follow Gentzkow and Kamenica (2016, 2017). These authors define a statistical experiment σ as a partition of Ω × [0, 1] into finitely many (Lebesgue) measurable subsets of Ω × [0, 1]. A signal s is an element of the partition σ, i.e., a measurable subset of Ω × [0, 1]. The probability of signal s ∈ σ conditional on ω is the (Lebesgue) measure of the set {x ∈ [0, 1] : (ω, x) ∈ s}.
Throughout, we omit the dependence on the experiment σ, and write λ_s for the probability of the signal s and π_s for the posterior probability. We denote the set of experiments that the experts can choose from by Σ.

In the first stage, expert i thus chooses an experiment σ_i ∈ Σ. The chosen experiments (σ₁, σ₂) are publicly observed. In the second stage, expert i privately observes the realization s_i ∈ σ_i and sends a private message m_i ∈ M_i to the decision-maker. We assume that the sets of messages are rich enough to communicate any signal realizations. Finally, the decision-maker observes the messages (m₁, m₂) (but not the realized signals (s₁, s₂)) and chooses an action a. We denote Γ(π°, u, v) the cheap-talk game.

Note that different extensive-form games are consistent with our description. Throughout, we assume that the state (ω, x) ∈ Ω × [0, 1] is chosen by Nature according to the probability distribution π° × U[0, 1] after the experts have chosen their experiments, where U[0, 1] denotes the uniform distribution on the unit interval. Thus, we have a proper sub-game after each choice of statistical experiments (σ₁, σ₂).

A strategy for expert i is a pair (σ_i, τ_i), where σ_i ∈ Σ and τ_i(σ_i, σ_j, s_i) ∈ ∆(M_i) for all (σ_i, σ_j, s_i) with s_i ∈ σ_i. A strategy for the decision-maker specifies a mixed action α(σ_i, σ_j, m_i, m_j) ∈ ∆(A) for all (σ_i, σ_j, m_i, m_j). The solution concept is weak perfect Bayesian equilibrium. We stress that this requires the beliefs to be consistent with the chosen experiments (σ₁, σ₂), even if these experiments are off the equilibrium path.

A few remarks are worth making. First, as in classical cheap-talk games, neither of the experts can commit to reporting strategies. Second, if the experiments are (σ₁, σ₂), then the joint probability of (s₁, s₂) ∈ σ₁ × σ₂ conditional on ω is the measure of the set {x : (ω, x) ∈ s₁ ∩ s₂}. Thus, if both experts choose the same experiment σ, then the probability of (s, s′) ∈ σ × σ is zero whenever s ≠ s′. (To see this, note that if s ≠ s′, then s ∩ s′ = ∅ since σ is a partition.) In words, if both experts choose the same experiment, their realized signals are perfectly correlated. This property will turn out to be crucial. An alternative modeling is to assume that there is a fixed set of statistical experiments and let the experts observe the realization of the experiment of their choice. This alternative modeling also implies that if the two experts choose to observe the same experiment's realization, their observations are identical. Lastly, it is usual to model statistical experiments as probability kernels σ* : Ω → ∆(S), where S is the (finite) set of signals. The latter formulation naturally implies the former: for each ω, we can partition [0, 1] into |S| non-empty and disjoint intervals such that the length of the s-th interval is σ*(s | ω) when the state is ω.
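To make the partition formalism concrete, the following minimal Python sketch (our own illustration; the kernel values are invented) realizes a kernel σ through the partition of [0, 1] described above, and checks that two experts who choose the same experiment observe identical signals.

```python
import random

# A kernel sigma: state -> {signal: probability}. Its partition view cuts
# [0, 1], for each state, into intervals whose lengths are sigma[state][s].
sigma = {"w0": {"s0": 0.7, "s1": 0.3},
         "w1": {"s0": 0.2, "s1": 0.8}}

def realize(kernel, state, x):
    """Return the signal whose interval contains x."""
    cum = 0.0
    for signal, prob in kernel[state].items():
        cum += prob
        if x < cum:
            return signal
    return signal  # guards against rounding when x is close to 1

random.seed(0)
for _ in range(5):
    state = random.choice(["w0", "w1"])   # Nature draws (state, x)
    x = random.random()
    # Both experts chose the same experiment, so both evaluate it at (state, x):
    s1 = realize(sigma, state, x)
    s2 = realize(sigma, state, x)
    assert s1 == s2
```

Because a signal is a deterministic function of the realized (ω, x), identical experiments yield identical observations, which is the perfect-correlation property exploited below.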
With a slight abuse of notation, we identify the probability kernel σ* with that particular partition of Ω × [0, 1].

We focus on truthful equilibria, in which the two experts choose the same experiment in the first stage and truthfully report the common signal realization in the second stage.

In what follows, we denote by v(α, π) the decision-maker's expected payoff when he chooses the mixed action α and his belief is π, and we denote by BR(π) := {α ∈ ∆(A) : v(α, π) ≥ v(α′, π), ∀α′ ∈ ∆(A)} the set of the decision-maker's best replies at π. Similarly, we write u(α, π) for an expert's expected payoff.

Footnote: To ease exposition, we do not explicitly consider randomizations over the choices of experiments. This does not affect any of our results.

Footnote: Note, however, that we can allow for the experts to choose identical and independent experiments without affecting our results. To do so, it suffices to define an experiment as a finite partition of Ω × [0, 1] × [0, 1], with (ω, x, y) distributed according to π° × U([0, 1]) × U([0, 1]). Intuitively, if the experts condition their random observations on x, they are perfectly correlated, while they are independent if one expert conditions on x and the other on y.

In this section, we show that the ability of the decision-maker to cross-verify information serves as a commitment device for the experts.
More precisely, we show that there exists an equilibrium of the cheap-talk game, in which the experts obtain their commitment value.

We define the commitment value as the highest payoff an expert can obtain when he commits to truthfully disclose the realized signal, as in games of Bayesian persuasion. Formally, consider the persuasion game where an expert first chooses a statistical experiment σ : Ω → ∆(S) and commits to truthfully reveal the realized signal s to the decision-maker, who then makes a decision. Kamenica and Gentzkow (2011) prove that the best equilibrium payoff for the expert in this game is given by cav u(π°), where cav u is the concavification of u and u(π) := max_{α ∈ BR(π)} u(α, π). (See also Aumann and Maschler, 1995.) For later reference, we write (λ*_s, π*_s)_{s∈S} for an optimal splitting of the prior π°, that is, ∑_{s∈S} λ*_s u(π*_s) = cav u(π°) and ∑_{s∈S} λ*_s π*_s = π°. We write Π* for {π*_s : s ∈ S}, co Π* for the convex hull of Π*, and ∆* for the set of all probability distributions over Π*. The corresponding optimal experiment is denoted σ*.
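Numerically, cav u(π°) and an optimal splitting can be approximated by a small linear program over a grid of candidate posteriors, with the Bayes-plausibility constraint that the posteriors average back to the prior. The sketch below is our own illustration; the step-shaped value function u and the prior are invented, and the grid makes this an approximation rather than an exact concavification.

```python
import numpy as np
from scipy.optimize import linprog

def u(pi):
    """Expert's value at the DM's best reply, as a function of pi = Pr[w1].
    This step shape is purely illustrative."""
    return 1.0 if 0.25 <= pi <= 0.65 else 0.2

prior = 0.15
grid = np.linspace(0.0, 1.0, 201)          # candidate posteriors
vals = np.array([u(p) for p in grid])

# Choose weights lam >= 0 on the grid to maximize sum(lam * u(posterior))
# subject to sum(lam * posterior) = prior and sum(lam) = 1.
res = linprog(
    c=-vals,                                # linprog minimizes, so negate
    A_eq=np.vstack([grid, np.ones_like(grid)]),
    b_eq=[prior, 1.0],
    bounds=[(0.0, None)] * grid.size,
)
print(-res.fun)                             # approx. cav u(prior); here 0.68
print(grid[res.x > 1e-8])                   # posteriors of an optimal splitting
```

Any kernel generating the optimal splitting is then an optimal experiment σ*.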
Theorem 1. There exists a truthful equilibrium of the cheap-talk game where both experts obtain their commitment value cav u(π°).

Before proving Theorem 1, we explain our result's logic with the help of a simple example. There are two states, ω₀ and ω₁, and four actions, a₀, a_L, a_R and a₁. The preferences are depicted in Figure 1. Throughout the example, probabilities refer to the probability of ω₁. Assume that π° = 0.45.

[Figure 1: Uniform Punishment and Cross-verification. Panel (a), DM's preferences: v(a, π) as a function of π = Pr[ω = ω₁]; each of a₀, a_L, a_R, a₁ is optimal for the DM on a successive interval of beliefs. Panel (b), expert's preferences: the dashed black lines depict u(a, π), the solid black lines depict u(π), and the solid blue line depicts cav u(π) where it differs from u(π). The red dashed line depicts the payoff to the mixed action α_p ∈ ∆({a_L, a_R}), the uniform punishment, which is a best response for the DM to the belief π_p.]

We first note that the optimal experiment σ* consists in splitting the prior into the posteriors π*_{s₀} = 0.3 and π*_{s₁} = 0.6. We also note that u(a₀, π*_{s₀}) < u(a₁, π*_{s₀}) and u(a₁, π*_{s₁}) < u(a₀, π*_{s₁}), that is, the experts have an incentive to misreport the realized signals. Thus, if there were a single expert, choosing the experiment σ* and truthfully reporting the realized signal would not be an equilibrium. More generally, no equilibrium would give the expert his commitment value.

Matters are different if the decision-maker chooses to consult another expert. To see this, suppose that the two experts choose the experiment σ* and truthfully report the outcome of the experiment. The decision-maker then holds belief 0.3 (resp., 0.6) and plays action a₀ (resp., a₁) after observing two matching messages equal to s₀ (resp., s₁). Off the equilibrium path, i.e., when the decision-maker observes two contradictory messages, assume that he holds a belief π_p at which both a_L and a_R are best replies, and plays the action α_p ∈ ∆({a_L, a_R}) = BR(π_p).

The key observation to make is that the mixed strategy α_p ∈ BR(π_p) is a uniform punishment, that is, u(α_p, π*_{s₀}) < u(a₀, π*_{s₀}) and u(α_p, π*_{s₁}) < u(a₁, π*_{s₁}). (See Figure 1.) In words, regardless of the realized signal, an expert is punished for deviating from truth-telling. All the decision-maker needs to know is that a deviation has occurred, and the presence of the second expert indeed guarantees that deviations are detected. The experts thus benefit from the decision-maker cross-verifying their information. (Naturally, there are other equilibria, where the decision-maker benefits from cross-verification. See the next section.)

We conclude with two additional remarks. First, if the experts choose the perfectly informative experiment, truthful reporting does not constitute an equilibrium. This is because the actions that are best for the decision-maker at beliefs π = 0 and π = 1 are the worst for the experts at those beliefs. Second, for any two experiments σ₁ and σ₂, there is an equilibrium where experts 1 and 2 choose experiments σ₁ and σ₂, respectively, and a babbling equilibrium of the ensuing sub-game is played. The Bayesian arithmetic behind the example's splitting can be checked directly, as the sketch below shows.
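A minimal consistency check (our own arithmetic; only the half-half weights and the posteriors 0.3 and 0.6 come from the example) recovers the prior and the kernel σ*( · | ω) by Bayes' rule:

```python
# Consistency check for the example's splitting (self-contained).
lam = {"s0": 0.5, "s1": 0.5}    # lambda*_{s0} = lambda*_{s1} = 1/2
post = {"s0": 0.3, "s1": 0.6}   # posteriors Pr[w1 | s]

# Bayes plausibility: the posteriors must average back to the prior.
prior = sum(lam[s] * post[s] for s in lam)
assert abs(prior - 0.45) < 1e-12

# Recover the kernel sigma*(s | w) by Bayes' rule.
for s in lam:
    p_s_given_w1 = lam[s] * post[s] / prior              # sigma*(s | w1)
    p_s_given_w0 = lam[s] * (1 - post[s]) / (1 - prior)  # sigma*(s | w0)
    print(s, round(p_s_given_w1, 3), round(p_s_given_w0, 3))
# Prints approx. 1/3 and 7/11 for s0, and approx. 2/3 and 4/11 for s1.
```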
We now turn to the proof of Theorem 1. The proof rests on three essential properties. First, if the two experts choose the same experiment, their signals' realizations are perfectly correlated. This is because they observe the same outcome. Second, if the two experts choose the same experiment, the decision-maker detects any deviation from truth-telling. This is because the decision-maker receives contradicting messages after any deviation. However, he cannot identify the deviator and, thus, cannot infer the true signal's realization. Therefore, to deter deviations, the decision-maker must be able to punish the two experts simultaneously. The third property is the existence of such a uniform punishment whenever the experiment is expert-optimal. The following lemma states this property.

Footnote: In the example, λ*_{s₀} = λ*_{s₁} = 1/2. The experiment is given by σ*(s₀ | ω₁) = 1/3 and σ*(s₀ | ω₀) = 7/11.

Lemma 1 (Uniform punishment). Let (λ*_s, π*_s)_{s∈S} be an optimal splitting. There exist π_p ∈ co Π* and α_p ∈ BR(π_p) such that u(α_p, π*_s) ≤ u(π*_s) for all π*_s ∈ Π*.

Lemma 1 is our main technical contribution. We postpone its proof to the end of this section and now show how to construct an equilibrium of the cheap-talk game with a payoff of cav u(π°) to the experts.

Proof of Theorem 1.
Let (λ*_s, π*_s)_{s∈S} be an optimal splitting inducing the payoff cav u(π°). Let σ* be the optimal experiment associated with that splitting. Recall that Π* := {π*_s : s ∈ S}. From Lemma 1, there exist π_p ∈ co Π* and α_p ∈ BR(π_p) such that for all π*_s ∈ Π*, u(α_p, π*_s) − u(π*_s) ≤ 0.

We construct a truthful equilibrium as follows. The experts choose the optimal experiment σ* and truthfully report the realized signal. Following the choice of σ*, the decision-maker chooses α ∈ BR(π_s), with u(α, π_s) = u(π_s), when he observes two identical messages equal to s. Alternatively, if the decision-maker receives two conflicting messages, he chooses α_p (sustained by the belief π_p). Finally, following the choice of any other statistical experiment, an equilibrium of the continuation game, which exists by finiteness, is played. It is routine to check that we indeed have an equilibrium. ∎

We now offer a series of remarks.

Remark 1. We have assumed that the two experts share the same preferences. If the preferences of one expert, say the second expert, are a convex combination of the preferences of the first expert and the decision-maker, i.e., βu(a, ω) + (1 − β)v(a, ω) for some β ∈ [0, 1], then we can still construct a truthful equilibrium where the first expert continues to obtain his commitment value. To see this, let α_s be such that u(α_s, π*_s) = u(π*_s) and note that v(α_p, π*_s) ≤ v(α_s, π*_s) = max_α̃ v(α̃, π*_s) for all s, where α_p is the punishment, which exists by Lemma 1. This implies that βu(α_p, π*_s) + (1 − β)v(α_p, π*_s) ≤ βu(α_s, π*_s) + (1 − β)v(α_s, π*_s) for all π*_s ∈ Π*, i.e., α_p is also a uniform punishment for the second expert. We illustrate this remark with a simple example. As in Crawford and Sobel (1982), assume that the decision-maker obtains the payoff −(α − ω)² when he chooses α ∈ [0, 1] and the state is ω. The payoffs of the two experts are −(α − ω − b)² and −(α − ω − βb)², with β ∈ [0, 1] and b > 0, respectively. The second expert is (weakly) less biased than the first expert. Observe that, up to a constant, the second expert's payoff is a convex combination of the first expert's payoff and the decision-maker's payoff:

−[(1 − β)(α − ω)² + β(α − ω − b)²] = −(α − ω − βb)² − b²β(1 − β).

Therefore, there exists an equilibrium which gives the most biased expert his commitment value.

Footnote: Throughout, we have assumed that the decision-maker has a finite set of actions. Our results extend to the set A being a non-empty compact subset of ℝ and concave continuous payoff functions. The proof of Lemma 1 only requires a slight modification: we need to invoke duality for convex programming rather than for linear programming.

Remark 2. We have assumed that the choice of experiments is publicly observed. If the decision-maker does not observe the experiments chosen by the two experts, but the experts observe each other's experiment choice, then again there is a truthful equilibrium where the optimal experiment σ* is chosen, as in Theorem 1. In this equilibrium, the play on the equilibrium path unfolds as in Theorem 1. If any expert deviates and chooses another experiment σ ≠ σ*, then the two experts send the message m, where m is a message that is never sent on the equilibrium path. If the decision-maker observes two messages that do not match, or observes a message equal to m from either of the two experts, then he best responds to his belief π_p ∈ co Π* and plays action α_p.
Remark 3. We have assumed that the two experts choose experiments simultaneously. This assumption is again not required for our result. Suppose instead that one expert, say the first expert, chooses an experiment σ : Ω → ∆(S × S), with expert i privately observing the signal's realization s_i. As before, after observing their signals, the experts send messages to the decision-maker, who then chooses an action. Yet again, we have a truthful equilibrium where the equilibrium payoff of the two experts is cav u(π°), as in Theorem 1. In this equilibrium, the first expert chooses the optimal experiment and perfectly correlates the second expert's signal with his own.

Remark 4. We have assumed weak perfect Bayesian equilibrium as our solution concept. If we restrict attention to a finite set of experiments, which contains σ*, then we can strengthen the solution concept to sequential equilibrium. We only need a slight modification of Lemma 1 to guarantee that the decision-maker believes that the realized signal is either s or s′ after observing a report (s, s′). We need to prove the existence of a belief π_{s,s′} ∈ ∆({π*_s, π*_{s′}}) and a mixed action α_{s,s′} ∈ BR(π_{s,s′}) such that u(α_{s,s′}, π*_{s̃}) − u(π*_{s̃}) ≤ 0 for all s̃ ∈ {s, s′}. A minor adaptation of the proof of Lemma 1 guarantees this result.

Proof of Lemma 1.
We first establish two intermediate claims; we then use these two claims to complete the proof. Let (λ*_s, π*_s)_{s∈S} be an optimal splitting. Recall that Π* := {π*_s : s ∈ S} and ∆* is the set of all probability distributions over Π*.

Footnote: In the quadratic example, the payoff u(π) to the most biased expert is −(V_π[ω] + b²), with V_π[ω] the variance of ω with respect to the distribution π. Since the variance of a real-valued random variable is concave in its distribution, full information disclosure attains the commitment value.

Claim 1:
For any λ ∈ ∆*, u(∑_s λ_s π*_s) ≤ ∑_s λ_s u(π*_s).

Proof of Claim 1:
Consider the convex hull of the graph of u, i.e., co{(π, r) ∈ ∆(Ω) × ℝ : r = u(π)}. By construction, the point (π°, cav u(π°)) = (∑_s λ*_s π*_s, ∑_s λ*_s u(π*_s)) is on the boundary of the convex hull. From the supporting hyperplane theorem, there exists a hyperplane h ∈ ℝ^{|Ω|} × ℝ supporting co{(π, r) ∈ ∆(Ω) × ℝ : r = u(π)} at (π°, cav u(π°)) such that the graph of u lies below h. For all s ∈ S, the point (π*_s, u(π*_s)) also lies on the hyperplane h. Consequently, the point (∑_s λ_s π*_s, ∑_s λ_s u(π*_s)) must also lie on the hyperplane. Since the graph of u lies below h, we obtain u(∑_s λ_s π*_s) ≤ ∑_s λ_s u(π*_s), as required. ∎

Claim 2:
Choose any non-empty subset B ⊂ A and ε > 0. If max_{s∈S} [u(α, π*_s) − u(π*_s)] ≥ ε for each α ∈ ∆(B), then there exists λ̂ ∈ ∆* such that min_{α∈∆(B)} u(α, ∑_s λ̂_s π*_s) ≥ ∑_s λ̂_s u(π*_s) + ε.

Proof of Claim 2:
The claim follows from duality. Consider the following linear program:

min_{(x,α) ∈ ℝ × ∆(B)} x subject to: ∑_{a∈B} α(a) [u(a, π*_s) − u(π*_s)] ≤ x for all s ∈ S.

This minimization problem has a solution x̂. Our hypothesis implies that x̂ ≥ ε. The dual program is given by

max_{(y,λ) ∈ ℝ × ∆(Π*)} y subject to: ∑_{s∈S} λ_s [u(a, π*_s) − u(π*_s)] ≥ y for all a ∈ B.

Since the primal linear program has a solution, the dual program also has a solution (ŷ, λ̂). No duality gap further implies that ŷ = x̂ ≥ ε. (See Section 4.2 of Luenberger and Ye, 2008.) Therefore, for all a ∈ B,

u(a, ∑_s λ̂_s π*_s) = ∑_{s∈S} λ̂_s u(a, π*_s) ≥ ε + ∑_{s∈S} λ̂_s u(π*_s).

Hence, u(α, ∑_s λ̂_s π*_s) ≥ ∑_s λ̂_s u(π*_s) + ε for all α ∈ ∆(B), as required. ∎
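For concreteness, this primal-dual pair is a finite minimax problem over the matrix of differences d(a, s) = u(a, π*_s) − u(π*_s), and it can be solved numerically. The sketch below is our own illustration with invented entries for d; it confirms the no-duality-gap equality x̂ = ŷ used above.

```python
import numpy as np
from scipy.optimize import linprog

# d[a, s] = u(a, pi*_s) - u(pi*_s), with invented values for three actions
# in B and two posteriors; every mix over rows has a column with gain >= 0.1.
D = np.array([[ 0.4, -0.1],
              [-0.2,  0.5],
              [ 0.1,  0.1]])
nA, nS = D.shape

# Primal: min x over (x, alpha in Delta(B)) s.t. (alpha @ D)[s] <= x for all s.
res_p = linprog(
    c=np.concatenate([[1.0], np.zeros(nA)]),
    A_ub=np.hstack([-np.ones((nS, 1)), D.T]),      # D^T alpha - x <= 0
    b_ub=np.zeros(nS),
    A_eq=[[0.0] + [1.0] * nA], b_eq=[1.0],         # alpha sums to one
    bounds=[(None, None)] + [(0.0, None)] * nA,
)

# Dual: max y over (y, lam in Delta(S)) s.t. (D @ lam)[a] >= y for all a.
res_d = linprog(
    c=np.concatenate([[-1.0], np.zeros(nS)]),      # maximize y
    A_ub=np.hstack([np.ones((nA, 1)), -D]),        # y - D lam <= 0
    b_ub=np.zeros(nA),
    A_eq=[[0.0] + [1.0] * nS], b_eq=[1.0],         # lam sums to one
    bounds=[(None, None)] + [(0.0, None)] * nS,
)

print(res_p.fun, -res_d.fun)   # both 0.1: no duality gap, x_hat = y_hat
# res_d.x[1:] is the mixture lam_hat over posteriors from Claim 2.
```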
We now use Claims 1 and 2 to complete the proof. Denote by br(π) ⊂ A the decision-maker's set of pure best replies to belief π.

By contradiction, assume that there do not exist π_p ∈ co Π* and α_p ∈ BR(π_p) such that u(α_p, π*_s) − u(π*_s) ≤ 0 for all π*_s ∈ Π*. Note that π ∈ co Π* if and only if π = ∑_s λ_s π*_s for some λ ∈ ∆*. Hence, our contradiction hypothesis can be restated as follows: for each λ ∈ ∆*, there exists ε(λ) > 0 such that max_{s∈S} [u(α, π*_s) − u(π*_s)] ≥ ε(λ) for each α ∈ ∆(br(∑_s λ_s π*_s)) = BR(∑_s λ_s π*_s). Let ε := min_{λ∈∆*} ε(λ). Note that ε > 0 because ε(λ) depends only on the finite set br(∑_s λ_s π*_s), and there are finitely many such subsets of A.

Define the correspondence F : ∆* → ∆*, with

F(λ) := {λ′ ∈ ∆* : min_{α ∈ BR(∑_s λ_s π*_s)} ∑_s λ′_s (u(α, π*_s) − u(π*_s)) ≥ ε}.

We can readily check that this correspondence is convex-valued and compact-valued. We argue below that it is non-empty-valued and lower hemi-continuous. Hence, the correspondence has a fixed point λ ∈ F(λ) by Theorem 15.4 in Border (1990). Noting that ∑_s λ(s) u(α, π*_s) = u(α, ∑_s λ(s) π*_s), we find

min_{α ∈ BR(∑_s λ(s) π*_s)} (u(α, ∑_s λ(s) π*_s) − ∑_s λ(s) u(π*_s)) ≥ ε.

Since u(∑_s λ(s) π*_s) ≥ min_{α ∈ BR(∑_s λ(s) π*_s)} u(α, ∑_s λ(s) π*_s), this yields u(∑_s λ(s) π*_s) ≥ ∑_s λ(s) u(π*_s) + ε, contradicting Claim 1 and establishing the result.

We now show that the correspondence is non-empty-valued. Pick any λ ∈ ∆*. The contradiction hypothesis states that max_{s∈S} [u(α, π*_s) − u(π*_s)] ≥ ε for each α ∈ BR(∑_s λ_s π*_s). Claim 2 then implies that there exists λ̂ ∈ ∆* such that

min_{α ∈ BR(∑_s λ_s π*_s)} ∑_s λ̂_s (u(α, π*_s) − u(π*_s)) ≥ ε,

i.e., the correspondence is non-empty-valued.

Finally, we prove lower hemi-continuity. Pick an open set O ⊆ ∆* such that F(λ) ∩ O ≠ ∅. Since BR is upper hemi-continuous (by the maximum principle) and A is finite, there exists a neighborhood O′ of λ such that BR(∑_s λ′_s π*_s) ⊆ BR(∑_s λ_s π*_s) for all λ′ ∈ O′. Therefore, for all λ′ ∈ O′,

min_{α ∈ BR(∑_s λ′_s π*_s)} ∑_s λ″_s (u(α, π*_s) − u(π*_s)) ≥ min_{α ∈ BR(∑_s λ_s π*_s)} ∑_s λ″_s (u(α, π*_s) − u(π*_s)) ≥ ε

for any λ″ ∈ F(λ) ∩ O because BR(∑_s λ′_s π*_s) ⊆ BR(∑_s λ_s π*_s), i.e., λ″ ∈ F(λ′). Hence, F(λ′) ∩ O ≠ ∅ for all λ′ ∈ O′, which proves the lower hemi-continuity of F (Definition 1.3 in Border (1990)). ∎

The previous section showed that the experts benefit from the decision-maker cross-verifying their information. This section explores whether the decision-maker can also benefit from cross-verification.

We begin with some definitions. Fix a cheap-talk game Γ(π°, u, v). We say that the experts benefit from persuasion if cav u(π°) > u(π°).
Similarly, we say that the decision-maker benefits from cross-verification if there exists an equilibrium of the cheap-talk game where the decision-maker's payoff exceeds the ex-ante payoff max_{a∈A} v(a, π°). Notice that if the decision-maker benefits from cross-verification, the experts must reveal some information to the decision-maker.

Define Â := {a ∈ A : ∃π ∈ ∆(Ω) s.t. a ∈ BR(π)} and v(π) := max_α v(α, π) for all π ∈ ∆(Ω). We say that there are no redundant actions for the decision-maker if for every non-empty proper subset B ⊂ Â, there exists π ∈ ∆(Ω) such that v(π) > max_{a∈B} v(a, π). There are no redundant actions for the experts if there are no two distinct actions a and a′ such that u(a, ω) = u(a′, ω) for all ω ∈ Ω.

Remark 5. The conditions of non-redundancy are generic. Moreover, the condition of no redundant actions for the decision-maker does not preclude strictly dominated actions. Two important implications of that condition are as follows: (i) the set BR⁻¹(a) := {π ∈ ∆(Ω) : v(a, π) = v(π)} has full dimension (as a subset of the simplex of dimension |Ω| − 1), and (ii) no action other than a is optimal in the relative interior of BR⁻¹(a), denoted int BR⁻¹(a).

Theorem 1 showed that the experts benefit from cross-verification in games where they benefit from persuasion. The following proposition further establishes that the decision-maker also benefits from cross-verification in such games.

Proposition 1.
Assume that there are no redundant actions for the decision-maker in the game Γ(π°, u, v). At almost all priors π°, if the experts benefit from persuasion, then the decision-maker benefits from cross-verification.

We first illustrate the logic of the proposition with the help of a simple example. There are two states, ω₀ and ω₁, and three actions, a₀, a₁ and a_p. Throughout the example, probabilities refer to the probability of ω₁. The payoffs are illustrated in Figure 2. The optimal experiment consists in splitting the prior into two posteriors π*_{s₀} < π° < π*_{s₁}. The experts strictly benefit from persuasion. From Theorem 1, there exists a truthful equilibrium where an expert's payoff is his commitment value. Action a_p is the uniform punishment sustaining the equilibrium. Note that a_p is uniquely optimal at the prior and also optimal at the two posteriors. Consequently, the decision-maker does not benefit from cross-verification at the equilibrium. Yet, we can construct another equilibrium where the decision-maker benefits from cross-verification. To see this, consider a more spread-out splitting of the prior into posteriors π_{s₀} < π*_{s₀} and π_{s₁} > π*_{s₁}. At π_{s₀} (resp., π_{s₁}), the decision-maker plays a₀ (resp., a₁). To sustain this splitting as an equilibrium, the decision-maker punishes the experts with a_p. The decision-maker strictly benefits from this more informative experiment.

[Figure 2: DM Benefits from Cross-verification. Panel (a), DM's preferences: each of a₀, a_p, a₁ is optimal on a successive interval of beliefs. Panel (b), expert's preferences: the solid black lines depict u(π), and the solid blue line depicts cav u(π) where it differs from u(π).]

We prove that the logic of the example generalizes to almost all priors. That is, for all priors but a subset of Lebesgue measure zero, we can always construct an equilibrium of the cheap-talk game where the decision-maker benefits from cross-verification if the experts benefit from persuasion. More precisely, we prove that the proposition holds at all interior priors where the decision-maker has at most two best replies, a generic condition.

The need for non-redundancy is clear. If the decision-maker is indifferent between all his actions, the decision-maker cannot benefit from cross-verification, while the experts can benefit from persuasion. We now turn to the proof; the convexity logic behind the example is illustrated numerically below.
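The comparison between the two splittings reflects a general fact: the decision-maker's value v̄(π) = max_a v(a, π) is a maximum of linear functions of π, hence convex, so spreading the posteriors further apart (while averaging back to the same prior) weakly raises his expected payoff. A minimal sketch, with invented payoffs standing in for Figure 2's:

```python
import numpy as np

# v[a] = (v(a, w0), v(a, w1)): invented payoffs for actions a0, ap, a1.
v = np.array([[0.9, 0.0],
              [0.5, 0.5],
              [0.0, 0.9]])

def v_bar(pi):
    """DM's value max_a v(a, pi) at belief pi = Pr[w1]; convex in pi."""
    return (v @ np.array([1.0 - pi, pi])).max()

coarse = [(0.5, 0.35), (0.5, 0.55)]   # (weight, posterior); mean 0.45
fine   = [(0.5, 0.20), (0.5, 0.70)]   # a mean-preserving spread of the prior

for name, split in (("coarse", coarse), ("fine", fine)):
    print(name, sum(w * v_bar(p) for w, p in split))
# The finer splitting yields the higher expected payoff (0.675 > 0.5425).
```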
Proof of Proposition 1. Consider an optimal splitting (λ*_s, π*_s)_{s∈S} of π°, which induces the value cav u(π°), where cav u(π°) > u(π°). Without loss of generality, assume that λ*_s > 0 for all s ∈ S. Let v(π) := max_α v(α, π) for all π ∈ ∆(Ω).

If the decision-maker benefits from the statistical experiment, there is nothing to prove. So, assume that the decision-maker does not benefit from the statistical experiment, i.e., ∑_s λ*_s v(π*_s) = v(π°). We construct another equilibrium at which the decision-maker benefits from cross-verification.

We first claim that for all a ∈ BR(π°), a ∈ BR(π) for all π ∈ co{π*_s : s ∈ S}. To see this, consider any a ∈ BR(π°) and observe that

∑_s λ*_s v(π*_s) = v(π°) = v(a, π°) = v(a, ∑_s λ*_s π*_s) = ∑_s λ*_s v(a, π*_s).

It follows that

∑_s λ*_s (v(π*_s) − v(a, π*_s)) = 0,

with λ*_s > 0 and v(π*_s) − v(a, π*_s) ≥ 0 for all s. If there existed s such that v(π*_s) > v(a, π*_s), we would have a contradiction. Hence, a ∈ BR(π*_s) for all s and, consequently, a ∈ BR(π) for all π ∈ co{π*_s : s ∈ S}.

From the definition of u, we have that u(a, π*_s) ≤ u(π*_s) for all s, for all a ∈ BR(π°), since BR(π°) ⊆ BR(π*_s) for all s. We now argue that for all a ∈ BR(π°), there exists s_a ∈ S such that u(a, π*_{s_a}) < u(π*_{s_a}). Choose any a ∈ BR(π°). To the contrary, assume that u(a, π*_s) = u(π*_s) for all s. We then have

cav u(π°) = ∑_s λ*_s u(π*_s) = ∑_s λ*_s u(a, π*_s) = u(a, π°) ≤ u(π°) ≤ cav u(π°),

a contradiction with the experts benefiting from the experiment.

To sum up, we have (i) BR(π°) ⊆ BR(π) for all π ∈ co{π*_s : s ∈ S}, and (ii) for each a ∈ BR(π°), there exists s_a such that u(a*_{s_a}, π*_{s_a}) > u(a, π*_{s_a}), with a*_{s_a} ∈ BR(π*_{s_a}) satisfying u(a*_{s_a}, π*_{s_a}) = u(π*_{s_a}).

For each a ∈ BR(π°), consider the open ball O = {π ∈ ∆(Ω) : ||π − π*_{s_a}|| < ε} such that u(a, π) < u(a*_{s_a}, π) for all π in the open ball. Since u is continuous in π and u(a, π*_{s_a}) < u(a*_{s_a}, π*_{s_a}), such an open ball exists.

We claim that O intersects the relative interior of BR⁻¹(a*_{s_a}). To see this, note that O ∩ BR⁻¹(a*_{s_a}) ≠ ∅, since π*_{s_a} is an element of both O and BR⁻¹(a*_{s_a}). Moreover, it follows from the non-redundancy of A that π*_{s_a} is not in the relative interior of BR⁻¹(a*_{s_a}), since any a ∈ BR(π°) is also optimal at π*_{s_a}. Since the relative interior of BR⁻¹(a*_{s_a}) is non-empty, there exists π** in the relative interior such that the half-open line segment [π**, π*_{s_a}) is contained in the relative interior. (See Theorem 2.1.3 and Lemma 2.1.6 in Hiriart-Urruty and Lemaréchal.) Therefore, there exists π_a in the intersection of the relative interior of BR⁻¹(a*_{s_a}) and O, i.e., such that u(a, π_a) < u(a*_{s_a}, π_a) = u(π_a). Note that v(a*_{s_a}, π_a) > v(a, π_a), since a*_{s_a} is uniquely optimal at π_a. In other words, there is an element of BR(π°), namely a, which is not an element of BR(π_a).
The last step consists in showing that there exist a ∈ BR(π°) and π′_a ∈ BR⁻¹(a) such that the open segment (π_a, π′_a) includes π°. Indeed, if such an open segment exists, we have a splitting (π_a, π′_a) of π° such that u(π′_a) ≥ u(a, π′_a) and u(π_a) = u(a*_{s_a}, π_a) > u(a, π_a). This splitting can be supported as a truthful equilibrium (with a as the punishment at belief π°). Moreover, since v(a*_{s_a}, π_a) > v(a, π_a), the decision-maker strictly benefits, the desired contradiction.

Finally, suppose that π° is in the interior of the simplex. If BR(π°) = {a}, then π° is in the relative interior of BR⁻¹(a). Thus, we can trivially find a segment with the required property.

If BR(π°) = {a, b} and s_a = s_b, then the same arguments apply, since the open segment will intersect either BR⁻¹(a) or BR⁻¹(b). If s_a ≠ s_b, choose π_{s_a} such that b is uniquely optimal at π_{s_a}. Such a π_{s_a} exists since v(b, π_{s_a}) = max_{a′ ∈ BR(π_{s_a})} v(a′, π_{s_a}) (if not, s_a = s_b). As before, the open segment intersects either BR⁻¹(a) or BR⁻¹(b). However, it cannot be BR⁻¹(b): if it were, b would be uniquely optimal at π_{s_a} and optimal at π° and π_{s_a}, which is not possible since BR⁻¹(b) is convex.

Since the set of interior priors with at most two best replies is generic, the proof is complete. ∎

Proposition 1 does not generalize to all priors. For a counter-example, consider Figure 3. There are three states, ω₁, ω₂ and ω₃, and two actions, a and b. The action a (resp., b) is optimal in the left triangle marked “a” (resp., in the right triangle marked “b”). At the prior π°, the action a is the unique best reply of the decision-maker. Assume that u(b, ω) > u(a, ω) for every state ω. Thus, if the experts truthfully reveal the state, they benefit from persuasion, while the decision-maker does not.

[Figure 3: A counter-example. The belief simplex over {ω₁, ω₂, ω₃}, split into a region where a is optimal and a region where b is optimal, with the prior π° in the region where a is optimal.]

Footnote: If there are two states, Proposition 1 generalizes to all interior priors. In this case, non-redundancy of the decision-maker's payoff implies that the decision-maker has at most two best replies at each belief, a case in which we know that Proposition 1 holds. In general, however, we do not know whether the proposition generalizes to all interior priors.
Proposition 2.
Assume that there are no redundant actions for the experts and the decision-maker in the game Γ(π°, u, v). If u is a concave function, then the decision-maker does not benefit from cross-verification. That is, in all equilibria of Γ(π°, u, v), the decision-maker's payoff is v(π°).

To understand Proposition 2, assume that the experts and the decision-maker have opposing preferences, that is, u = −v. In this case, what is best for the decision-maker is worst for the experts, and therefore v(π) = −min_a u(a, π). Moreover, if either of the experts, say expert 1, chooses an uninformative experiment, an expert's payoff is u(π°) in all equilibria of the ensuing game. This is because if expert 2's experiment produces two signals s and s′ such that the set of best replies at π_s differs from the set of best replies at π_{s′}, then expert 2 has an incentive to misreport one of the two signals, if not both. The experts cannot credibly communicate any information. Therefore, no expert can obtain less than u(π°) in equilibrium. The experts cannot obtain more than u(π°) either. Indeed, for every on-path posterior π, the decision-maker chooses a best reply in equilibrium; hence an expert's payoff is minimized at π, i.e., an expert's payoff is u_min(π) := min_a u(a, π). The result then follows from the concavity of u_min. Proposition 2 does not require opposing preferences; the logic outlined above extends to all games where u is concave.

To further illustrate Proposition 2, consider Figure 4. For the decision-maker to benefit from cross-verification, the experts would need to choose an experiment which induces the decision-maker to play different actions after receiving different signals. However, we cannot sustain such a choice as an equilibrium. An expert would always have an incentive to misreport the realized signal. This is because any action other than the one chosen by the decision-maker improves an expert's payoff, i.e., there is no uniform punishment.

The need for the non-redundancy of the experts' actions is again clear. If the experts are totally indifferent, they cannot benefit from persuasion but can provide the decision-maker with perfectly informative signals. It remains to prove Proposition 2. We do so through a series of lemmata. The following lemma shows that the conflict of interest between the experts and the decision-maker is maximal when the experts cannot benefit from persuasion; that is, the decision-maker's best replies at belief π minimize the experts' expected payoff.

Lemma 2.
For every π ∈ ∆(Ω), BR(π) = argmin_{α′ ∈ ∆(A)} u(α′, π).

[Figure 4: No Benefit from Cross-verification or Persuasion. Panel (a), DM's preferences: one action is optimal for the DM at low beliefs, the other at high beliefs. Panel (b), expert's preferences: the solid black lines depict the concave function u(π).]
Proof of Lemma 2.
We start by proving the following claim.
Claim 1. a ∈ A and π ∈ int BR⁻¹(a) implies {a} = argmin_{α′ ∈ ∆(A)} u(α′, π).

Proof of Claim 1. Fix a ∈ A and π ∈ int BR⁻¹(a). We first argue that there does not exist a′ ∈ A such that u(a′, π) < u(a, π). To the contrary, suppose such an a′ exists. Pick an arbitrary π′ ∈ int BR⁻¹(a′). There exist π″ ∈ int BR⁻¹(a′) and λ ∈ (0, 1) such that π″ = λπ + (1 − λ)π′. We obtain

u(a′, π″) = u(π″) ≥ λu(π) + (1 − λ)u(π′) = λu(a, π) + (1 − λ)u(a′, π′) > λu(a′, π) + (1 − λ)u(a′, π′) = u(a′, π″),

where the first inequality follows from the concavity of u, the desired contradiction.

We now argue that there does not exist a′ ∈ A such that u(a′, π) = u(a, π). From the above, for all π_n ∈ int BR⁻¹(a), u(a′, π_n) ≥ u(a, π_n). Consider any convex combination (λ_n, π_n)_n satisfying ∑_n λ_n π_n = π, π_n ∈ int BR⁻¹(a) for all n, λ_n > 0 for all n, and the π_n being linearly independent. Such a convex combination exists since BR⁻¹(a) has full dimension. If u(a′, π) = u(a, π), then

u(a′, π) = ∑_n λ_n u(a′, π_n) ≥ ∑_n λ_n u(a, π_n) = u(a, π) = u(a′, π),

i.e., u(a′, π_n) = u(a, π_n) for all n, a contradiction with the condition of no redundant actions for the experts. Therefore, for all a′ ≠ a, u(a′, π) > u(a, π), which completes the proof of the claim. ∎

From Claim 1, the statement is true for all π such that π ∈ int BR⁻¹(a) for some a ∈ A. Since BR and argmin_{α′ ∈ ∆(A)} u(α′, π) are upper hemi-continuous correspondences which coincide almost everywhere (in Lebesgue measure), they coincide everywhere. ∎

We now derive an immediate implication of Lemma 2. We first introduce some additional notation. Recall that following the choice of experiments (σ₁, σ₂), we have a proper sub-game. We are interested in analyzing the play in these sub-games. To ease notation, we drop the dependence on (σ₁, σ₂) and write π(m₁, m₂) ∈ ∆(Ω) for the decision-maker's belief after observing the messages (m₁, m₂). Similarly, we write α(m₁, m₂) ∈ ∆(A) for the receiver's equilibrium reply. Finally, let P denote the probability distribution over signals, messages and actions induced by the prior and the strategy profile, conditional on the experiments (σ₁, σ₂). At an equilibrium, sequential rationality requires the decision-maker to choose a best reply to his belief. Fix an equilibrium, an on-path profile of messages (m₁, m₂), and its associated belief π(m₁, m₂). Since all best replies of the decision-maker to π(m₁, m₂) minimize the experts' payoffs, no expert must be able to induce the decision-maker to choose an action outside BR(π(m₁, m₂)) by changing his message.

Lemma 3. If P(m_i, m_j) > 0, then for all m′_i, α(m′_i, m_j) ∈ BR(π(m_i, m_j)).

Proof of Lemma 3. Without loss of generality, let i = 1, j = 2. The proof is by contradiction. Assume that there exist m₁, m′₁, m′₂ such that α(m′₁, m′₂) ∉ BR(π(m₁, m′₂)). From Lemma 2, u(α(m′₁, m′₂), π(m₁, m′₂)) > u(α(m₁, m′₂), π(m₁, m′₂)).
The equilibrium payoff to expert 1 is

∑_{(m̃₁, m̃₂)} P(m̃₁, m̃₂) u(α(m̃₁, m̃₂), π(m̃₁, m̃₂)).

If expert 1 deviates by always sending the message m′₁, his expected payoff is

∑_{(m̃₁, m̃₂)} P(m̃₁, m̃₂) u(α(m′₁, m̃₂), π(m̃₁, m̃₂)).

We now argue that the deviation is profitable, the required contradiction. From Lemma 2, we have that u(α(m̃₁, m̃₂), π(m̃₁, m̃₂)) ≤ u(α(m′₁, m̃₂), π(m̃₁, m̃₂)) for all (m̃₁, m̃₂). Moreover, there exists (m₁, m₂) such that the inequality is strict and P(m₁, m₂) > 0. Thus, the deviation is profitable. ∎

The next lemma shows that if any expert chooses an uninformative experiment, then the experts' and the decision-maker's payoffs in the ensuing equilibrium are equal to their payoffs at the prior belief.

Lemma 4. Let (σ₁, σ₂) be a profile of experiments. If either σ₁ or σ₂ is an uninformative experiment, then the experts' equilibrium payoff is u(π°) and the decision-maker's equilibrium payoff is v(π°) in the ensuing sub-game.

Proof of Lemma 4. Without loss of generality, assume that σ₁ is uninformative. Since the experiments are observed by the decision-maker, this implies that π(m₁, m₂) is independent of m₁. (Recall that we require the beliefs to be consistent with the experiments.) To ease the notation, we drop the dependence on m₁.

Together with Lemma 3, this implies that for all (m₁, m₂) such that P(m₁, m₂) > 0, α(m′₁, m₂) ∈ BR(π(m₂)) for all m′₁. That is, α(m′₁, m₂) is a best reply to all posterior beliefs π(m₂). Note that since P(m₁, m₂) > 0, the message m₂ has strictly positive probability. It follows that α(m′₁, m₂) is a best reply to π° (as the prior is a convex combination of the posteriors). Since this is true for all (m′₁, m₂), the decision-maker's payoff is v(π°).

Finally, since Lemma 2 states that the experts are indifferent among all best replies of the decision-maker, an expert's payoff is u(π°). ∎

We now conclude the proof.

Lemma 5.
In any equilibrium of the cheap-talk game, the experts' payoff is u(π°), and the decision-maker's payoff is v(π°).

Proof of Lemma 5. Fix any equilibrium of the cheap-talk game. From Lemma 4, the payoff to any expert must be at least u(π°). We now argue that it cannot be higher. If (σ*₁, σ*₂) are the experiments chosen at the first stage, then in the ensuing sub-game, an expert's payoff is:

∑_{(m₁,m₂)} P(m₁, m₂) u(α(m₁, m₂), π(m₁, m₂)) = ∑_{(m₁,m₂)} P(m₁, m₂) min_{a∈A} u(a, π(m₁, m₂)) ≤ min_{a∈A} u(a, ∑_{(m₁,m₂)} P(m₁, m₂) π(m₁, m₂)) = min_{a∈A} u(a, π°) = ū(π°).

(Recall that P, α and π depend on (σ*₁, σ*₂), but to ease notation, we do not explicitly write the dependence.)

Finally, we argue that the decision-maker cannot get a payoff higher than v(π°) either. Indeed, for the decision-maker to obtain a higher payoff, there must exist an action a ∈ BR(π°) and a message profile (m₁, m₂) such that P(m₁, m₂) > 0 and a ∉ BR(π(m₁, m₂)). This, however, would imply that an expert's equilibrium payoff is strictly less than u(a, π°), a contradiction with an expert's equilibrium payoff being equal to ū(π°) = min_{a′∈A} u(a′, π°). The latter assertion follows from Lemma 3, which states that u(a, π(m₁, m₂)) > u(α(m₁, m₂), π(m₁, m₂)) and u(a, π(m′₁, m′₂)) ≥ u(α(m′₁, m′₂), π(m′₁, m′₂)) for all pairs of messages (m′₁, m′₂) with P(m′₁, m′₂) > 0. ∎

Conclusion

In this paper, we studied the role that multiple experts can play in cross-verification. Clearly, cross-verification is not the sole reason for soliciting advice from multiple experts. Consulting a diverse set of experts with different opinions, specializations, and preferences can provide a decision-maker with insights about the merits of different aspects of an issue. In fact, a decision-maker may be able to perfectly learn a multidimensional state by consulting experts about different dimensions. However, consulting experts that have information about different dimensions of a decision reduces the scope for cross-verification, since cross-verification is most effective when experts' information is highly correlated. Moreover, as we demonstrated in this paper, the experts have an incentive to facilitate cross-verification by acquiring correlated information. This points to an interesting tension that can inform future research on committee design.
References
Au, P. H. and K. Kawai (2020): “Competitive Information Disclosure by Multiple Senders,”
Games and Economic Behavior, 119, 56–78.
Aumann, R. J. and M. B. Maschler (1995):
Repeated Games with Incomplete Information, Cambridge, MA: MIT Press.
Battaglini, M. (2002): “Multiple Referrals and Multidimensional Cheap Talk,”
Econometrica, 70, 1379–1401.
Border, K. C. (1990): Fixed Point Theorems with Applications to Economics and Game Theory, Cambridge University Press.

Crawford, V. P. and J. Sobel (1982): “Strategic Information Transmission,” Econometrica, 50, 1431–1451.
Gentzkow, M. and E. Kamenica (2016): “Competition in Persuasion,”
The Review of Economic Studies, 84, 300–322.

——— (2017): “Bayesian Persuasion with Multiple Senders and Rich Signal Spaces,”
Games and Economic Behavior, 104, 411–429.
Gilligan, T. W. and K. Krehbiel (1989): “Asymmetric Information and Legislative Rules with a Heterogeneous Committee,” American Journal of Political Science, 33, 459–490.
Green, J. and N. Stokey (1978):
Two Representations of Information Structures and their Comparisons, Institute for Mathematical Studies in the Social Sciences.
Kamenica, E. (2019): “Bayesian Persuasion and Information Design,”
Annual Review of Economics, 11, 249–272.
Kamenica, E. and M. Gentzkow (2011): “Bayesian Persuasion,”
American Economic Review, 101, 2590–2615.
Koessler, F., M. Laclau, and T. Tomala (2018): “Interactive Information Design,” HEC Paris Research Paper No. ECO/SCD-2018-1260.

Krishna, V. and J. Morgan (2001a): “Asymmetric Information and Legislative Rules: Some Amendments,” American Political Science Review, 435–452.

——— (2001b): “A Model of Expertise,”
The Quarterly Journal of Economics, 116, 747–775.
Li, F. and P. Norman (2018): “On Bayesian Persuasion with Multiple Senders,”
Economics Letters, 170, 66–70.

——— (2020): “Sequential Persuasion,”
Theoretical Economics, forthcoming.
Lipnowski, E. (2020): “Equivalence of Cheap Talk and Bayesian Persuasion in a Finite Continuous Model.”
Luenberger, D. G. and Y. Ye (2008):
Linear and Nonlinear Programming, Springer.
Lyu, Q. (2020): “Information Design in Cheap Talk,”
Available at SSRN 3579297.

Sobel, J. (2013): “Giving and Receiving Advice,”
Advances in Economics and Econometrics, 1, 305–341.
Wolinsky, A. (2002): “Eliciting Information from Multiple Experts,” Games and Economic Behavior, 41, 141–160.