arXiv [econ.TH] (DRAFT)

A Practical Approach to Social Learning
Amir Ban (Weizmann Institute of Science) and Moran Koren (Stanford University)

February 26, 2020
Abstract
Models of social learning feature either binary signals or abstract signal structures often deprived of micro-foundations. Both classes of models are limited when analyzing interim results or performing empirical analysis. We present a method of generating signal structures which are richer than the binary model, yet tractable enough to perform simulations and empirical analysis. We demonstrate the method's usability by revisiting two classical papers: (1) we discuss the economic significance of unbounded signals [12]; (2) we use experimental data from [2] to perform econometric analysis. Additionally, we provide a necessary and sufficient condition for the occurrence of action cascades.

Contact: [email protected] and [email protected]. Supported by the Fulbright Postdoctoral Fellowship 2019/2020.

Introduction

The literature on social learning focuses on the question "Why do people often emulate the actions of predecessors, even when those actions contradict their own private information?". The canonical models of Banerjee [4] and of Bikhchandani, Hirshleifer and Welch [5] provided a simple and intuitive answer: when the history is sufficiently skewed against her private information, a rational agent's optimal action is to ignore it and follow the history. They do so by presenting a model in which agents, who receive a noisy private signal about an underlying state of nature, are called to act sequentially. Agent signals are binary, and their quality is commonly known. Each agent sees the actions of previous arrivals, updates her belief over the state space using the information revealed by this action history, and chooses the action which maximizes her expected utility. Within this elegant yet simple model, they show that in any game trajectory, at some point, agents will follow in the footsteps of those who precede them, even if, a priori, their signal favors the other action.
These models generated great interest both from experimental economists, trying to construct information cascades in a lab (see [2]), and from econometricians, trying to estimate the effect of social learning on field data (see [13]). The main challenge facing those practitioners originated from the binary signal structure: in those models, a cascade emerges whenever the number of agents who took one action exceeds the number who took the other by two. To circumvent this theoretical limitation, researchers resorted to estimating "trace evidence" of the occurrence of cascades (see [13]), assuming agents make mistakes ([2]), or assuming that agent valuations for the two actions fluctuate (see [8]). All these methods violate the simple intuition mentioned above.

A second wave of research on social learning originated from Smith and Sørensen [12]. In those models, the signal structure is assumed to be abstract, and the focus is often on the asymptotic efficiency of the public belief convergence process. If the public belief converges to an interior point (as in the binary models), an information cascade occurs; if the convergence is to either zero or one, learning occurs. Smith and Sørensen [12] contributed two major results to the discussion: (1) For learning to occur, one requires that, for every history, there is always a positive chance that the agent will choose a contrary action (a condition they call unbounded signals); otherwise an information cascade occurs in finite time. (2) They distinguished between an information cascade (i.e., convergence of the public belief to an interior point) and an action cascade (i.e., at some point the agent chooses an action with probability one). Herrera and Hörner [11] present a condition for the occurrence of information cascades when the domain is compact and the information structure satisfies several technical conditions.
In Section 2.1 we show that the condition does not hold in the general case and present a counterexample.

Smith and Sørensen [12]'s results generated great interest among theoreticians attempting to challenge them under various information structures (see [1]), agent utility functions (see [7]), or market structures (see [3]). However, the vast majority of those results focus on asymptotic analysis, and often struggle to provide economic insight into agents' short-term behavior.

In this work we attempt to bridge this gap by presenting a method to generate signal structures which are richer than the binary-signal models, yet more tractable than the abstract-signal models. Our method can therefore be used both to perform empirical analysis while maintaining the core intuition of information cascades, and to construct examples which convey short-term economic behavior, thus supplementing theoretical research. As an additional theoretical contribution, we present a necessary and sufficient condition for the occurrence of action cascades in general signal structures.

The structure of the paper is as follows. In Section 1 we present our model. In Section 2 we revisit the results of [12] and discuss their economic significance in the short term; Section 2.1 contains our condition for action cascades. In Section 3 we demonstrate the applicability of our method to econometric analysis by reverse engineering the experiment of [2]. In Section 4 we conclude.

1 The Model

There are two possible states of nature, ω ∈ Ω = {0, 1}. The prior probability of the realized state being 1 is denoted by Pr(ω = 1) = µ_1. This prior probability is commonly known, and is often dubbed the initial public belief. There is a countable set of agents N. The set of available actions for agent t is A_t = A ≡ {0, 1} for all t ∈ N. Agent utilities are determined by the realized state in the following manner:

u_t(a_t) = 1 if a_t = ω, and u_t(a_t) = −1 if a_t ≠ ω.

Agents arrive sequentially in a predetermined order.
Without loss of generality, we denote each agent by her arrival time; that is, for all t ∈ N, we assume that agent t arrives at period t. Each agent receives a private signal s_t ∈ S, where S is the set of possible signals and is identical for all agents.

We follow Banerjee [4] and Bikhchandani et al. [5] and assume that the signal set is binary, i.e., S = {0, 1}. Let q_t = Pr(s_t = ω), where ω ∈ {0, 1} is the realized state of nature. In the canonical models mentioned above, a common assumption is that q_t = q for every t; that is, the quality of all agent signals is the same. We diverge from this assumption and assume that the signal quality q_t is also private and is independently drawn from a known set Q ⊆ [1/2, 1] according to some distribution F(·). Note that F(·) is not state-dependent and is commonly known. We denote by f the corresponding density (if Q is a continuous sample space; PMF otherwise).

Agents observe all actions taken prior to their arrival. Let H_t ⊆ {0, 1}^(t−1) denote the set of possible action histories at time t, where H_1 = {∅}. Let H = ∪_{t≥1} H_t be the set of all finite histories, and H_∞ = H ∪ {0, 1}^∞ be the set of all infinite histories.

A strategy for agent t is a measurable function σ_t : H_t × S × Q → Δ(A) which maps every history to a decision rule. We denote a profile of agent strategies by ¯σ = (σ_t)_{t≥1}. A strategy profile ¯σ, together with the information structure (S, Q, F) and the initial public belief µ_1, induces a probability distribution P_¯σ over Ω × H_∞ × S^∞ × Q^∞. We define the public belief at time t, µ_t = P_¯σ(ω = 1 | h_t), as the probability that the state is 1, conditional on the realized history h_t before t's action.
Agents update their beliefs using Bayes' rule, and hence the expected utility of an agent for action a = 1 can be written as

u_t(a_t = 1 | s_t = 0, q_t) = µ_t(1 − q_t) / (µ_t(1 − q_t) + (1 − µ_t)q_t) − (1 − µ_t)q_t / (µ_t(1 − q_t) + (1 − µ_t)q_t)   (1)

u_t(a_t = 1 | s_t = 1, q_t) = µ_t q_t / (µ_t q_t + (1 − µ_t)(1 − q_t)) − (1 − µ_t)(1 − q_t) / (µ_t q_t + (1 − µ_t)(1 − q_t))   (2)

An agent with s_t = 0 will thus play a = 1 whenever µ_t > q_t. Similarly, an agent with s_t = 1 will play a_t = 1 whenever µ_t > 1 − q_t.

As the value of q_t is unknown to future arriving agents, calculating the updating rule requires some work. To do so, we use the distribution of signal qualities F to derive a distribution over possible agent posteriors. For every pair (s_t, q_t), we denote x(s_t, q_t) = Pr(ω = 1 | µ_t = 1/2, q_t, s_t). Note that, for all agents other than t, x(s_t, q_t) is a random variable. We suppress the notation (s_t, q_t) and let the agent type x be a random variable describing agent t's posterior belief whenever µ_t = 1/2.

Let ¯q = sup Q. Recalling the definition of quality, q_t = Pr(s_t = ω), x is a random variable with support in [1 − ¯q, ¯q] and the following state-conditional densities g_ω(·) for state ω ∈ {0, 1}:

g₁(x) = x f(x) if x ∈ [1/2, ¯q];  x f(1 − x) if x ∈ [1 − ¯q, 1/2)   (3)

g₀(x) = (1 − x) f(x) if x ∈ [1/2, ¯q];  (1 − x) f(1 − x) if x ∈ [1 − ¯q, 1/2)   (4)

and define G_ω(x) = ∫_{1−¯q}^{x} g_ω(z) dz as the CDF of the state-conditional distribution.

Note that for any quality distribution F, the ratio g₁(x)/g₀(x) equals x/(1 − x), and is thus increasing in x; i.e., the type distribution exhibits the monotone (increasing) likelihood ratio property (MLRP). By Bayes' rule, agent t's private posterior µ := Pr(ω = 1 | h_t, x_t) satisfies

µ/(1 − µ) = [µ_t/(1 − µ_t)] · [g₁(x_t)/g₀(x_t)] = [µ_t/(1 − µ_t)] · [x_t/(1 − x_t)],

and therefore the optimal strategy of agent t is a threshold strategy.
That is, for every µ_t there exists ˜x(µ_t) ∈ [1 − ¯q, ¯q] such that whenever x < ˜x(µ_t), a_t = 0, and whenever x > ˜x(µ_t), a_t = 1. We denote µ⁺_t := Pr(ω = 1 | h_t, a_t = 1) and µ⁻_t := Pr(ω = 1 | h_t, a_t = 0), and formulate the updating rules as follows:

µ⁺_{t+1} / (1 − µ⁺_{t+1}) = [µ_t/(1 − µ_t)] · [(1 − G₁(˜x(µ_t))) / (1 − G₀(˜x(µ_t)))]   (5)

µ⁻_{t+1} / (1 − µ⁻_{t+1}) = [µ_t/(1 − µ_t)] · [G₁(˜x(µ_t)) / G₀(˜x(µ_t))]   (6)

Unlike in the abstract signal structure of Smith and Sørensen [12], for a given F(·), µ_1, and Q, equations (5) and (6) can be used to calculate the updated public belief following any finite-length history h_t = {a_1, a_2, ..., a_{t−1}}. In addition, agent t with type x will play a_t = 1 whenever x/(1 − x) > (1 − µ_t)/µ_t.

An up-cascade occurs whenever µ_t > ¯q, and a down-cascade occurs whenever µ_t < 1 − ¯q. The cascade regions, unsurprisingly, are identical to those of the binary model of Banerjee [4] and Bikhchandani et al. [5]. The difference is in the time of convergence, i.e., in the number of consecutive actions required to induce a cascade. In the following examples we examine this aspect using several distribution families.
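To make the updating rules concrete, the sketch below implements equations (5) and (6) in code for one tractable family, q_t ~ U[1/2, ¯q] (the family studied in the next section). The closed-form CDFs and the worked numbers in the final comment are our own derivations, under the assumptions µ_1 = 1/2, ¯q = 3/4, and history {1, 1}.

```python
# Public-belief updating, equations (5)-(6), for the quality family
# q_t ~ U[1/2, qbar].  State-conditional type CDFs (derived in Section 1.1):
#   G1(x) = (x^2 - (1-qbar)^2) / (2*qbar - 1)   on [1-qbar, qbar]
#   G0(x) = (qbar^2 - (1-x)^2) / (2*qbar - 1)   on [1-qbar, qbar]

def G1(x, qbar):
    lo = 1.0 - qbar
    if x <= lo:
        return 0.0
    if x >= qbar:
        return 1.0
    return (x * x - lo * lo) / (2.0 * qbar - 1.0)

def G0(x, qbar):
    # Symmetry of densities (3)-(4): g0(x) = g1(1-x), hence G0(x) = 1 - G1(1-x).
    return 1.0 - G1(1.0 - x, qbar)

def update(mu, action, qbar):
    """One step of (5) or (6); the threshold type is x~(mu) = 1 - mu."""
    odds, xt = mu / (1.0 - mu), 1.0 - mu
    if action == 1:
        odds *= (1.0 - G1(xt, qbar)) / (1.0 - G0(xt, qbar))   # equation (5)
    else:
        odds *= G1(xt, qbar) / G0(xt, qbar)                   # equation (6)
    return odds / (1.0 + odds)

# Worked example (our assumptions): mu_1 = 1/2, qbar = 3/4, history {1, 1}.
mu = 0.5
for a in (1, 1):
    mu = update(mu, a, 0.75)
print(mu)  # 15/22 = 0.6818..., so the next threshold is 7/22 > 1/4
```

Because the threshold after two identical actions stays strictly inside the type support, the third agent still acts contrarily with positive probability, unlike in the binary model.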
1.1 Example: Uniformly Distributed Signal Qualities

To illustrate the uses of the method described above, we introduce a simple example. Assume that q_t ∼ U[1/2, ¯q] for every t. By equations (3) and (4) we can calculate the following distributions:

G₁(x) = ∫_{1−¯q}^{x} r/(¯q − 1/2) dr = 0 if x < 1 − ¯q;  (x² − (1 − ¯q)²)/(2¯q − 1) if x ∈ [1 − ¯q, ¯q];  1 if x > ¯q,

and

G₀(x) = ∫_{1−¯q}^{x} (1 − r)/(¯q − 1/2) dr = 0 if x < 1 − ¯q;  (¯q² − (1 − x)²)/(2¯q − 1) if x ∈ [1 − ¯q, ¯q];  1 if x > ¯q.

Assume that µ_1 = 1/2, h = {1, 1} and ¯q = 3/4. We can calculate agent thresholds in the following way:

x/(1 − x) = 1 ⇒ ˜x_1 = 1/2 ⇒ µ⁺ = 5/8 ⇒ ˜x_2 = 3/8 ⇒

µ⁺⁺/(1 − µ⁺⁺) = [(1 − G₁(1/2))/(1 − G₀(1/2))] · [(1 − G₁(3/8))/(1 − G₀(3/8))] ⇒ µ⁺⁺ = 15/22 ≈ 0.68 ⇒ ˜x_3 = 7/22 > 1/4.

As shown in [4, 5], in the classic binary signal model the public belief following every history is determined by the initial public belief and the difference between the number of a = 1 and a = 0 actions taken. Whenever this difference reaches two, agent actions are no longer informative; thus a cascade occurs and µ_{t+1} = µ_t. Note that here, unlike in the classical model with binary signals, after h = {1, 1} the next agent still plays a = 0 with positive probability. This attribute makes possible more direct methods of empirical analysis (as we show in Section 3), but also allows us to gain further insight into the interim periods of the observational learning process, as we show in the following section.

2 Revisiting Smith and Sørensen [12]

In their seminal work, Smith and Sørensen [12] generalized the game information structure from one in which signals are binary to one in which abstract signals are drawn from one of two state-dependent distributions. This extension provided important insights into the forces governing herding. Their first result states that when the initial belief is not in the cascade region and signals are not discrete, information cascades do not occur: the public belief never crosses into the cascade region, but converges to its border.
Their second result states that, despite the scarcity of information cascades, the history of actions will "settle" on an alternative. They identified a necessary and sufficient condition under which the public belief converges to the true state of the world.

Smith and Sørensen classified game information structures into bounded and unbounded beliefs. When signals are unbounded, at any public belief, and after any history, there is always a positive chance that an agent will receive a signal strong enough to induce a contrary action. In our model, this translates to ¯q = 1. In this section we revisit their classic results using the example from Section 1.1.

In the table below we calculated the probability of a contrary action following a history with an initial belief of 1/2 and a sequence of 1, 2, 4, and 8 consecutive a = 0 actions, for several values of the maximal signal quality ¯q. One can see that when signals are unbounded, the probability of a contrary action remains significant even after 8 consecutive actions. In addition, note that when signals are bounded, the public belief stabilizes rapidly near the border of the cascade region, yet never crosses it. The effect that signal boundedness has on the process of social learning is best witnessed when the sequence of consecutive actions is sufficiently long. For example, when the history is {0}, the probability of a = 1 is roughly the same for all levels of ¯q. However, when h = {0,0,0,0,0,0,0,0}, the probability of a = 1 differs by an order of magnitude, from 0.0021 when ¯q = 0.55 to 0.0556 when ¯q = 1.
µ_1 = 1/2              ¯q = 0.55          ¯q = 0.66          ¯q = 0.77          ¯q = 1
h                      Pr(a=1|h) µ_{t+1}  Pr(a=1|h) µ_{t+1}  Pr(a=1|h) µ_{t+1}  Pr(a=1|h) µ_{t+1}
{0}                    0.25      0.475    0.25      0.42     0.25      0.365    0.25      0.25
{0,0}                  0.125     0.4625   0.125     0.382    0.138     0.304    0.167     0.167
{0,0,0,0}              0.032     0.454    0.0374    0.352    0.05      0.257    0.1       0.1
{0,0,0,0,0,0,0,0}      0.0021    0.450    0.0034    0.341    0.008     0.234    0.0556    0.0556

Table 1: The probability of a contrary action and the convergence of the public belief as the maximal signal quality varies, in the family of information structures presented in Section 1.1.
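The public-belief columns of Table 1 can be reproduced by iterating equation (6) for consecutive a = 0 actions; the sketch below does so for the uniform-quality family (our own reimplementation; only the belief column is reproduced here).

```python
# Reproduce the public-belief column of Table 1: belief after n consecutive
# a = 0 actions, for q_t ~ U[1/2, qbar] and initial belief mu_1 = 1/2.

def G1(x, qbar):
    lo = 1.0 - qbar
    if x <= lo:
        return 0.0
    if x >= qbar:
        return 1.0
    return (x * x - lo * lo) / (2.0 * qbar - 1.0)

def G0(x, qbar):
    return 1.0 - G1(1.0 - x, qbar)   # symmetry g0(x) = g1(1-x)

def belief_after_zeros(n, qbar, mu=0.5):
    for _ in range(n):
        odds, xt = mu / (1.0 - mu), 1.0 - mu     # threshold type of next agent
        odds *= G1(xt, qbar) / G0(xt, qbar)      # equation (6), action a = 0
        mu = odds / (1.0 + odds)
    return mu

for qbar in (0.55, 0.66, 0.77, 1.0):
    print(qbar, [round(belief_after_zeros(n, qbar), 4) for n in (1, 2, 4, 8)])
# The qbar = 1 row gives 0.25, 0.1667, 0.1, 0.0556, matching the unbounded column.
```

With ¯q = 1 the belief after n consecutive zeros follows the simple pattern 1/(2n + 2), so it keeps falling; with bounded ¯q it stalls just above 1 − ¯q, exactly as Table 1 shows.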
2.1 A Condition for Action Cascades

When generalizing the binary model to one with abstract signals, Smith and Sørensen distinguished between two types of cascades: (1) a public belief cascade, i.e., a case in which the public belief converges to an interior point, which occurs whenever signals are bounded; and (2) an action cascade, which describes a case where agent actions converge to a single one, occurring whenever the public belief crosses into the cascade region. For the latter, Smith and Sørensen [12] stated that it occurs with positive probability when the distribution tails contain atoms. In [11], Herrera and Hörner study the existence of action cascades in models with continuous sample spaces and prove that, under some technical conditions, when signals are continuous, an action cascade occurs if and only if the information structure does not exhibit the increasing hazard ratio property (IHRP). In this section we show that the condition provided in [11] does not hold in the general case; we do so by constructing a counterexample. In addition, in Theorem 1 we provide an alternative necessary and sufficient condition for the occurrence of action cascades in the general case.

To study the occurrence of action cascades, we study a generalized version of the example presented in Section 1.1. In this family of information structures we assume that agent signal qualities are distributed uniformly on [q̲, ¯q] for some 1 ≥ ¯q ≥ q̲ ≥ 1/2. When q̲ = 1/2 we recover the example from Section 1.1, and when q̲ = ¯q we recover the binary signal model of [4, 5]. Using Python, we calculated the number of consecutive a = 1 actions required for an action cascade to start, for a fixed ¯q and varying q̲. The results are plotted in Figure 1. (In [11], they require that the information structure satisfies MLRP, that the signal space is compact, and that the density at every x is bounded away from zero.)
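A sketch of that calculation (our own reimplementation; the initial belief µ_1 = 1/2 and the parameter pairs in the comments are illustrative assumptions, not necessarily the values used for Figure 1):

```python
# Consecutive a = 1 actions needed before an action cascade starts, for
# qualities q_t ~ U[qlow, qbar].  Type support: [1-qbar, 1-qlow] U [qlow, qbar].

def G1(x, qlow, qbar):
    lo, hi, d = 1.0 - qbar, 1.0 - qlow, 2.0 * (qbar - qlow)
    if x <= lo:
        return 0.0
    if x <= hi:
        return (x * x - lo * lo) / d
    if x < qlow:
        return (hi * hi - lo * lo) / d            # the gap (1-qlow, qlow)
    if x < qbar:
        return (hi * hi - lo * lo + x * x - qlow * qlow) / d
    return 1.0

def G0(x, qlow, qbar):
    return 1.0 - G1(1.0 - x, qlow, qbar)          # symmetry g0(x) = g1(1-x)

def steps_to_cascade(qlow, qbar, mu=0.5, max_steps=50):
    """Consecutive a = 1 actions until mu >= qbar (up-cascade); None if never."""
    for n in range(1, max_steps + 1):
        odds, xt = mu / (1.0 - mu), 1.0 - mu
        odds *= (1.0 - G1(xt, qlow, qbar)) / (1.0 - G0(xt, qlow, qbar))
        mu = odds / (1.0 + odds)
        if mu >= qbar:          # even the lowest type x = 1 - qbar now plays 1
            return n
    return None

print(steps_to_cascade(0.78, 0.80))  # 2: near-binary qualities cascade quickly
print(steps_to_cascade(0.50, 0.75))  # None: bounded continuous signals never cascade
```

The gap in the type support is what allows the belief to jump past ¯q in finitely many steps when q̲ is large; with q̲ = 1/2 the belief only converges to the border.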
By [11], (strict) IHRP holds if the following mapping is increasing in the signal x:

H(x) = [(1 − G₀(x))/(1 − G₁(x))] · [g₁(x)/g₀(x)].
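The mapping H can be evaluated numerically. The sketch below does so for the uniform-quality family; the parameter values are our own illustrations: a quality support [q̲, ¯q] = [0.6, 0.8], which leaves a gap in the type support, produces a non-monotone H (an IHRP violation), while [0.5, 0.75] does not.

```python
# Hazard ratio H(x) = (1 - G0(x)) / (1 - G1(x)) * g1(x) / g0(x), where
# g1(x)/g0(x) = x/(1-x) on the type support of the uniform-quality family.

def G1(x, qlow, qbar):
    lo, hi, d = 1.0 - qbar, 1.0 - qlow, 2.0 * (qbar - qlow)
    if x <= lo:
        return 0.0
    if x <= hi:
        return (x * x - lo * lo) / d
    if x < qlow:
        return (hi * hi - lo * lo) / d
    if x < qbar:
        return (hi * hi - lo * lo + x * x - qlow * qlow) / d
    return 1.0

def G0(x, qlow, qbar):
    return 1.0 - G1(1.0 - x, qlow, qbar)

def H(x, qlow, qbar):
    return (1.0 - G0(x, qlow, qbar)) / (1.0 - G1(x, qlow, qbar)) * x / (1.0 - x)

# With a gap in the support ([0.6, 0.8]: types in [0.2, 0.4] U [0.6, 0.8]),
# H dips near the top of the lower branch, so IHRP is violated:
print(H(0.30, 0.6, 0.8), H(0.39, 0.6, 0.8))   # about 0.306 > about 0.293
# Without a gap ([0.5, 0.75]), H is increasing on the support:
print(H(0.30, 0.5, 0.75), H(0.50, 0.5, 0.75), H(0.70, 0.5, 0.75))
```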
[Figure 1 about here: x-axis, values of q̲ (ticks between 0.55 and 0.75); y-axis, number of consecutive a = 1 actions; panel title, time to cascade for a fixed ¯q and varying q̲.]

Figure 1: The occurrence of action cascades when signal qualities are drawn from a uniform distribution over [q̲, ¯q] (for a fixed ¯q and varied q̲).

By Figure 1, one can see that action cascades do not occur when q̲ is below a cutoff value, and occur after the history h_t = {1} whenever q̲ is at or above it. From this one can rule out atoms in the distribution tails as the cause of cascades, as our signals are continuous yet action cascades do occur for sufficiently large q̲. In addition, the IHRP condition of Herrera and Hörner appears inaccurate, as our results provide a counterexample: for some of the parameter values shown in Figure 2, the information structure violates the IHRP condition, yet action cascades do not occur. (The attentive reader may suspect that this contradiction is due to the fact that our example violates Herrera and Hörner's assumption of a compact domain. However, in Appendix A we provide an example with a compact domain which violates IHRP, yet in which no action cascade occurs.) In the following section we provide a necessary and sufficient condition for action cascades.

[Figure 2 about here: the hazard ratio (1 − G₀(x))/(1 − G₁(x)) · g₁(x)/g₀(x) plotted against x, for a fixed ¯q and four values of q̲.]

Figure 2: Hazard ratio for a fixed ¯q and various q̲.

In the following theorem we provide the condition for action cascades in the general MLRP case (not only in the family of information structures described in our model). We therefore slightly alter our notation. Let X denote an abstract set of signals. At state ω ∈ {0, 1}, agent signals are independently drawn from a state-dependent distribution F_ω. We assume that no signal perfectly reveals the state, i.e., that F₀ and F₁ are mutually absolutely continuous with respect to each other. We denote the Radon-Nikodym derivative of F_ω with respect to the probability measure F₀ + F₁ by f_ω.
We denote the signal structure bounds as follows:

x̲ = arg min_{x∈X} f₁(x)/(f₀(x) + f₁(x)),  ¯x = arg max_{x∈X} f₁(x)/(f₀(x) + f₁(x)).

We assume that x̲, ¯x ∈ X and that the information structure exhibits the monotone likelihood ratio property, i.e., f₁(x)/f₀(x) increases with x. For example, in the information structures generated by our example from Section 2, X = [1 − ¯q, 1 − q̲] ∪ [q̲, ¯q], ¯x = ¯q and x̲ = 1 − ¯q. Hereafter, for readability, we slightly abuse notation and denote the state-conditional CDF by F_ω(x) and the state-conditional PDF (or PMF if discrete) by f_ω(x).

In the following theorem we provide a necessary and sufficient condition for action cascades.
Theorem 1.
Let X ⊆ [x̲, ¯x] be defined as above. If the distributions F₀, F₁ exhibit MLRP, and the prior public belief µ_1 := Pr(ω = 1) is such that an action cascade is not in progress, then a necessary condition, and a sufficient one when strict, for an action up-cascade to start is that there exists x ∈ X for which

f₁(x̲)/f₀(x̲) ≥ [f₁(x)/f₀(x)] · [(1 − F₀(x))/(1 − F₁(x))],   (7)

and a necessary condition, and a sufficient one when strict, for an action down-cascade to start is that there exists x ∈ X for which

f₁(¯x)/f₀(¯x) ≤ [f₁(x)/f₀(x)] · [F₀(x)/F₁(x)].   (8)

(Note that when X is finite, f_ω is the probability mass function of F_ω, and when X is continuous, f_ω is simply its density.)

Proof.
The agent's expected utility for each action is

u(a = 1 | x, µ) = f₁(x)µ / (f₁(x)µ + f₀(x)(1 − µ)) − f₀(x)(1 − µ) / (f₁(x)µ + f₀(x)(1 − µ))   (9)

u(a = 0 | x, µ) = −f₁(x)µ / (f₁(x)µ + f₀(x)(1 − µ)) + f₀(x)(1 − µ) / (f₁(x)µ + f₀(x)(1 − µ))   (10)

Thus, the agent chooses a = 1 whenever f₁(x)/f₀(x) > (1 − µ)/µ, chooses a = 0 whenever f₁(x)/f₀(x) < (1 − µ)/µ, and is indifferent otherwise.

Therefore, whenever f₁(x̲)/f₀(x̲) > (1 − µ)/µ, an up-cascade is in progress, and conversely, if an up-cascade is in progress, f₁(x̲)/f₀(x̲) ≥ (1 − µ)/µ. Whenever f₁(¯x)/f₀(¯x) < (1 − µ)/µ, a down-cascade is in progress, and conversely, if a down-cascade is in progress, f₁(¯x)/f₀(¯x) ≤ (1 − µ)/µ.

We denote the threshold signal by ˜x(µ) = sup{x ∈ X | f₁(x)/f₀(x) ≤ (1 − µ)/µ}. Following action a, other agents update their belief µ in the following manner:

(1 − µ⁺)/µ⁺ = [(1 − µ)/µ] · [(1 − F₀(˜x(µ)))/(1 − F₁(˜x(µ)))]   (11)

(1 − µ⁻)/µ⁻ = [(1 − µ)/µ] · [F₀(˜x(µ))/F₁(˜x(µ))]   (12)

where µ⁺, µ⁻ denote the updated public belief following actions a = 1 and a = 0, respectively. So, if action a = 1 starts an up-cascade, we must have

f₁(x̲)/f₀(x̲) ≥ (1 − µ⁺)/µ⁺ = [(1 − µ)/µ] · [(1 − F₀(˜x(µ)))/(1 − F₁(˜x(µ)))] ≥ [f₁(˜x(µ))/f₀(˜x(µ))] · [(1 − F₀(˜x(µ)))/(1 − F₁(˜x(µ)))],

which is (7) with ˜x(µ) substituted for x. For sufficiency, note that when this inequality is strict and the public belief equals f₀(˜x(µ))/(f₀(˜x(µ)) + f₁(˜x(µ))), a cascade starts after the next action, if it is a = 1. The second part of the theorem follows from symmetric considerations.

In our example, equation (7) holds whenever there exists x ∈ [1 − ¯q, 1 − q̲] ∪ [q̲, ¯q] for which

[x/(1 − x)] · [(1 − G₀(x))/(1 − G₁(x))] ≤ (1 − ¯q)/¯q,

with G₀, G₁ the state-conditional type CDFs of the uniform family. For the value of ¯q used in Figure 1, Theorem 1 reproduces the cutoff value of q̲ above which action cascades occur.
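Condition (7) can be checked numerically for the uniform-quality family by scanning a grid of x values; the parameter pairs below are our own illustrations.

```python
# Check condition (7) for q_t ~ U[qlow, qbar]: does there exist x with
#   f1(xlow)/f0(xlow) = (1 - qbar)/qbar  >  (x/(1-x)) * (1-G0(x))/(1-G1(x)) ?
# A strict solution means an action up-cascade can start.

def G1(x, qlow, qbar):
    lo, hi, d = 1.0 - qbar, 1.0 - qlow, 2.0 * (qbar - qlow)
    if x <= lo:
        return 0.0
    if x <= hi:
        return (x * x - lo * lo) / d
    if x < qlow:
        return (hi * hi - lo * lo) / d
    if x < qbar:
        return (hi * hi - lo * lo + x * x - qlow * qlow) / d
    return 1.0

def G0(x, qlow, qbar):
    return 1.0 - G1(1.0 - x, qlow, qbar)

def up_cascade_possible(qlow, qbar, grid=2000):
    lhs = (1.0 - qbar) / qbar
    lo = 1.0 - qbar
    for k in range(1, grid):                 # interior grid over [1-qbar, qbar]
        x = lo + (qbar - lo) * k / grid
        g1 = G1(x, qlow, qbar)
        if g1 >= 1.0:                        # avoid division by zero at the top
            continue
        rhs = x / (1.0 - x) * (1.0 - G0(x, qlow, qbar)) / (1.0 - g1)
        if lhs > rhs + 1e-9:                 # strict inequality found
            return True
    return False

print(up_cascade_possible(0.78, 0.80))  # True: action cascades can start
print(up_cascade_possible(0.50, 0.75))  # False: bounded learning, no cascade
```

The same routine agrees with the cascade region of the binary model: as q̲ approaches ¯q, a strict solution always exists.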
Furthermore, when the signal distribution has an atom at x̲, an a = 1 cascade may occur, and when it has an atom at ¯x, an a = 0 action cascade may occur, as there is positive probability of such a posterior. To see this, note that when signals are binary with quality q, equation (7) for s_t = 0 can be written as

[(1 − q)/q] · [(1 − q)/q] ≤ (1 − q)/q,

and this inequality holds for every q > 1/2.

In addition, as shown by Herrera and Hörner [11] for continuous distributions over a compact domain, and under some technical conditions (specifically, the density at every x ∈ X is above some positive constant), the existence of a solution to equation (7) is determined by whether or not the information structure exhibits IHRP (see Proposition 2 in [11]).

3 Econometric Analysis

In order to demonstrate the applicability of our method to econometric estimation, we revisit the experimental work of Anderson and Holt [2]. In [2] the authors conducted an experiment designed to generate information cascades in a controlled environment. In the experiment we revisit, each subject received a private draw from an urn which contained either two 'a'-labeled balls and one 'b'-labeled ball, or two 'b'-labeled balls and one 'a'-labeled ball. Following this process, subjects were sequentially asked to declare out loud whether they thought the urn was a majority-'a' urn or a majority-'b' urn. This process yielded 90 sequences of declarations. The authors then conducted an econometric analysis to prove the existence of information cascades, built on the binary signal model with a known probability of agents' mistakes.

Our model assumes that agents vary in their ability to trust the informational value of their private signal. Under this assumption, we attempt to "reverse engineer" the signal quality in the aforementioned experiment.
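The estimation exercise described next can be sketched as a simple grid-search GMM: compute the model-implied probabilities Pr(a_t = 1 | h) for length-two histories and pick the quality parameter minimizing their distance to the observed proportions. Everything below is an illustrative reconstruction: the moment definition (the next-action probability given h) is our reading of the text, an identity weighting matrix replaces the two-step optimal one, and the self-check uses synthetic rather than experimental data.

```python
# Grid-search GMM sketch: estimate the quality parameter qbar of the family
# q_t ~ U[1/2, qbar] from conditional action frequencies after length-2 histories.

def G1(x, qbar):
    lo = 1.0 - qbar
    if x <= lo:
        return 0.0
    if x >= qbar:
        return 1.0
    return (x * x - lo * lo) / (2.0 * qbar - 1.0)

def G0(x, qbar):
    return 1.0 - G1(1.0 - x, qbar)

def p_one(mu, qbar):
    """Pr(next action = 1 | public belief mu)."""
    xt = 1.0 - mu
    return mu * (1.0 - G1(xt, qbar)) + (1.0 - mu) * (1.0 - G0(xt, qbar))

def update(mu, a, qbar):
    odds, xt = mu / (1.0 - mu), 1.0 - mu
    num, den = 1.0 - G1(xt, qbar), 1.0 - G0(xt, qbar)
    odds *= (num / den) if a == 1 else ((1.0 - num) / (1.0 - den))
    return odds / (1.0 + odds)

HISTORIES = [(0, 0), (0, 1), (1, 0), (1, 1)]

def model_moments(qbar, mu1=0.5):
    out = []
    for h in HISTORIES:
        mu = mu1
        for a in h:
            mu = update(mu, a, qbar)
        out.append(p_one(mu, qbar))
    return out

def estimate(data_moments, grid=500):
    qs = [0.5 + 0.5 * k / grid for k in range(1, grid + 1)]   # (0.5, 1.0]
    return min(qs, key=lambda q: sum((m - d) ** 2
                                     for m, d in zip(model_moments(q), data_moments)))

# Self-check with synthetic data: moments generated at qbar = 0.7 are recovered.
print(estimate(model_moments(0.7)))  # 0.7
```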
We base our analysis on the signal structure presented in Section 1.1 and use Hansen's Generalized Method of Moments (GMM) estimation procedure [9], which yields consistent, efficient, and asymptotically normal estimators, to estimate the subjects' signal quality.

In GMM estimation, the goal is to find the model parameters for which the distance between a vector of model moments and the corresponding vector of data moments is minimized. In social learning models, a natural choice for the model moments is
Pr(a_t = 1 | h_t) for some subset of histories h_t ∈ A ⊆ H_t. (The experimental data for Anderson and Holt's symmetric experiment is publicly available. We maintain the required assumption of a weakly stationary ergodic stochastic process through our selection of the moment conditions described below.) That is, for every h ∈ A, we define the moment condition

E[(Pr(a_t = 1 | h, q) − a_t) · 1{h_t = h}] = 0,

where 1{h_t = h} is an indicator function which returns 1 if the history up to action a_t equals h and zero otherwise, and q is the parameter which defines the quality distribution introduced in Section 1.1.

We denote by φ_h the proportion of a = 1 actions taken following the history h in the data, and denote the corresponding vector of conditional proportions, following each of the histories in our chosen subset, by φ. For every q ∈ [0.5, 1] we can calculate recursively the conditional probability φ̂(q)_h = Pr(a_t = 1 | h, q). We denote the vector of conditional model probabilities by φ̂(q). The GMM estimator ˆq is the value for which the distance between the two vectors is minimized, i.e.,

ˆq = argmin_{q ∈ [0.5, 1]} ||φ̂(q) − φ||.

Due to technical reasons, we chose to focus on all histories of length two, i.e., A = {00, 01, 10, 11}. In this approach we have four conditions and attempt to estimate a single parameter; thus the model is identified and can be estimated. As there are more conditions than parameters, we are in an over-identified estimation case, and we therefore use a two-step estimation to find the optimal weighting matrix.

The two-step efficient GMM estimation resulted in an estimate ˆq slightly above (by 7.5%) the actual signal quality of q = 2/3. Furthermore, the true quality q = 2/3 lies inside the estimate's 99% confidence interval.

4 Conclusion

The literature on social learning can be roughly divided into two groups: one in which the information structure is limited to a binary signal, and another in which the signal structure is assumed to be abstract.
The lack of a method to generate richer, yet tractable, signal structures poses a challenge for incorporating these models in applied theory or econometric work: the structure in the former group is too limited for the majority of complex environments or estimation methods, while the structure in the latter group, though extremely useful for asymptotic analysis, imposes difficulties when attempting to analyze finite-horizon results or to convey economic significance.

In this paper we attempt to bridge this gap by presenting a method of generating signal structures which are richer than the binary model of [4, 5], yet more tractable than the abstract model of [12]. We demonstrate the advantage of our approach by revisiting two classical papers [12, 2].

Our goal in this paper is to provide new tools for applied theory and empirical research on social learning. Theoreticians can utilize our model to easily generate examples, perform numeric calculations, and complement their asymptotic results. Econometricians and experimenters can use our method to construct estimation procedures which directly assess the effect of information cascades. In addition, we make a theoretical contribution by providing a necessary and sufficient condition for the occurrence of action cascades.

Notes to Section 3: We chose to exclude longer histories because, in several sessions, Anderson and Holt [2] introduced a public signal after the third round. Additionally, by choosing only length-two histories, we verify that the assumption of a weakly stationary ergodic stochastic process holds, yielding a consistent GMM estimator. The method of two-step efficient GMM estimation is described in Chapter 3 of [10]. The Python code used for estimation, adapted from [6], is available at https://github.com/morankor/practical_inf_cascades/.

References

[1] Daron Acemoglu, Asuman Ozdaglar, and Ali ParandehGheibi. Spread of Misinformation in Social Networks.
Games and Economic Behavior, 2010.

[2] Lisa R. Anderson and Charles A. Holt. Information Cascades in the Laboratory.
The American Economic Review, 87(5):847–862, 1997.

[3] Itai Arieli and Manuel Mueller-Frank. Multidimensional Social Learning.
The Review of Economic Studies, 86(3):913–940, May 2019.

[4] Abhijit V. Banerjee. A Simple Model of Herd Behavior.
The Quarterly Journal of Economics, 107(3):797–817, 1992.

[5] Sushil Bikhchandani, David Hirshleifer, and Ivo Welch. A Theory of Fads, Fashion, Custom, and Cultural Change as Informational Cascades.
Journal of Political Economy, 100(5):992–1026, 1992.

[6] Richard W. Evans. Generalized Method of Moments (GMM) Estimation, 2018.

[7] Erik Eyster, Andrea Galeotti, Navin Kartik, and Matthew Rabin. Congested Observational Learning.
Games and Economic Behavior, 87:519–538, 2014.

[8] Jacob K. Goeree, Thomas R. Palfrey, Brian W. Rogers, and Richard D. McKelvey. Self-Correcting Information Cascades.
Review of Economic Studies, 74(3):733–762, 2007.

[9] Lars Peter Hansen. Large Sample Properties of Generalized Method of Moments Estimators.
Econometrica, 50(4):1029–1054, 1982.

[10] Fumio Hayashi.
Econometrics. Princeton University Press, Princeton, NJ, 2000.

[11] Helios Herrera and Johannes Hörner. A Necessary and Sufficient Condition for Information Cascades. 2012.

[12] Lones Smith and Peter Sørensen. Pathological Outcomes of Observational Learning.
Econometrica, 68(2):371–398, 2000.

[13] Juanjuan Zhang. The Sound of Silence: Observational Learning in the U.S. Kidney Market.
Marketing Science, 29(2), 2010.

A An example of IHRP violation without action cascades
Consider an alternative information structure. Assume that agent signals are drawn from (1 − c, c) for some c ∈ [1/2, 1]. In state ω = 0 the signals are drawn uniformly, i.e., for all x ∈ (1 − c, c), g₀(x) = 1/(2c − 1), and for all x ∉ (1 − c, c), g₀(x) = 0. In state ω = 1 the signals are drawn according to

g₁(x) = (1 − ε)/(c − 1/2) if x ∈ (1 − c, 1/2);  ε/(c − 1/2) if x ∈ [1/2, c);  0 otherwise,

for some ε ∈ (1/2, 1).

Now, for some δ ∈ (0, 1), assume that with probability δ agent signals are generated by our example from Section 2.1 (with parameters ¯q > c and q̲ = c), and with probability 1 − δ signals are generated as described above. Note that the resulting compound distribution has a compact domain and exhibits the MLRP property (by construction), yet for sufficiently large ε it violates the IHRP property. Using Python, we computed this example for specific parameter values of δ, c, ¯q, and ε, and verified that IHRP is violated while no action cascade occurs.