Opinion fluctuations and disagreement in social networks
Daron Acemoglu, Giacomo Como, Fabio Fagnani, Asuman Ozdaglar
arXiv [cs.SI]. Mathematics of Operations Research (INFORMS).
Daron Acemoglu
Giacomo Como
Fabio Fagnani
Dipartimento di Scienze Matematiche, Politecnico di Torino, corso Stati Uniti 24, 10129, Torino, Italy, [email protected],http://calvino.polito.it/ fagnani/indexeng.html
Asuman Ozdaglar
We study a tractable opinion dynamics model that generates long-run disagreements and persistent opinion fluctuations. Our model involves an inhomogeneous stochastic gossip process of continuous opinion dynamics in a society consisting of two types of agents: regular agents, who update their beliefs according to information that they receive from their social neighbors; and stubborn agents, who never update their opinions and might represent leaders, political parties or media sources attempting to influence the beliefs in the rest of the society. When the society contains stubborn agents with different opinions, the belief dynamics never lead to a consensus (among the regular agents). Instead, beliefs in the society fail to converge almost surely, the belief profile keeps on fluctuating in an ergodic fashion, and it converges in law to a non-degenerate random vector.

The structure of the graph describing the social network and the location of the stubborn agents within it shape the opinion dynamics. The expected belief vector is proved to evolve according to an ordinary differential equation coinciding with the Kolmogorov backward equation of a continuous-time Markov chain on the graph with absorbing states corresponding to the stubborn agents, and hence to converge to a harmonic vector, with every regular agent's value being the weighted average of its neighbors' values, and boundary conditions corresponding to the stubborn agents' beliefs. Expected cross-products of the agents' beliefs allow for a similar characterization in terms of coupled Markov chains on the graph describing the social network.

We prove that, in large-scale societies which are highly fluid, meaning that the product of the mixing time of the Markov chain on the graph describing the social network and the relative size of the linkages to stubborn agents vanishes as the population size grows large, a condition of homogeneous influence emerges, whereby the stationary beliefs' marginal distributions of most of the regular agents have approximately equal first and second moments.
Key words: opinion dynamics, multi-agent systems, social networks, persistent disagreement, opinion fluctuations, social influence.
MSC2000 subject classification: Primary: 91D30, 60K35; Secondary: 93A15
OR/MS subject classification: Primary: Games/group decisions: stochastic; Secondary: Markov processes, Random walk
1. Introduction
Disagreement among individuals in a society, even on central questions that have been debated for centuries, is the norm; agreement is the rare exception. How can disagreement of this sort persist for so long? Notably, such disagreement is not a consequence of lack of communication or some other factors leading to fixed opinions. Disagreement remains even as individuals communicate and sometimes change their opinions.

Existing models of communication and learning, based on Bayesian or non-Bayesian updating mechanisms, typically lead to consensus provided that communication takes place over a strongly connected network (e.g., Smith and Sorensen [46], Banerjee and Fudenberg [7], Acemoglu, Dahleh, Lobel and Ozdaglar [2], Bala and Goyal [6], Gale and Kariv [25], DeMarzo, Vayanos and Zwiebel [19], Golub and Jackson [26], Acemoglu, Ozdaglar and ParandehGheibi [3], Acemoglu, Bimpikis and Ozdaglar [1]), and are thus unable to explain persistent disagreements. One notable exception is provided by models that incorporate a form of homophily mechanism in communication, whereby individuals are more likely to exchange opinions or communicate with others that have similar beliefs, and fail to interact with agents whose beliefs differ from theirs by more than some given confidence threshold. This mechanism was first proposed by Axelrod [5] in the discrete opinion dynamics setting, and then by Krause [30], and Deffuant and Weisbuch [18], in the continuous opinion dynamics framework. Such belief dynamics typically lead to the emergence of different asymptotic opinion clusters (see, e.g., [34, 10, 14]), but fail to explain persistent opinion fluctuations in the society, as well as the role of influential agents in the opinion formation process.
In fact, the latter phenomena have been empirically observed and reported in the social science literature; see, e.g., the stream of work originating with Kramer's paper [29], documenting large swings in voting behavior within short periods, and the sizable literature in social psychology (e.g., Cohen [13]) documenting changes in political beliefs as a result of parties or other influential organizations.

In this paper, we investigate a tractable opinion dynamics model that generates both long-run disagreement and opinion fluctuations. We consider an inhomogeneous stochastic gossip model of communication wherein there is a fraction of stubborn agents in the society who never change their opinions. We show that the presence of stubborn agents with competing opinions leads to persistent opinion fluctuations and disagreement among the rest of the society.

More specifically, we consider a society envisaged as a social network of n interacting agents (or individuals), communicating and exchanging information. Each agent a starts with an opinion (or belief) X_a(0) ∈ R and is then activated according to a Poisson process in continuous time. Following this event, she meets one of the individuals in her social neighborhood according to a pre-specified stochastic process. This process represents an underlying social network. We distinguish between two types of individuals: stubborn and regular. Stubborn agents, which are typically few in number, never change their opinions: they might thus correspond to media sources, opinion leaders, or political parties wishing to influence the rest of the society and, to a first approximation, not getting any feedback from it. In contrast, regular agents, which make up the great majority of the agents in the social network, update their beliefs to some weighted average of their pre-meeting belief and the belief of the agent they met.
The opinions generated through this information exchange process form a Markov process whose long-run behavior is the focus of our analysis. First, we show that, under general conditions, these opinion dynamics never lead to a consensus (among the regular agents). In fact, regular agents' beliefs fail to converge almost surely, and keep on fluctuating in an ergodic fashion. Instead, the belief of each regular agent converges in law to a non-degenerate stationary random variable and, similarly, the vector of beliefs of all agents jointly converges to a non-degenerate stationary random vector. This model therefore provides a new approach to understanding persistent disagreements and opinion fluctuations.
Second, we investigate how the structure of the graph describing the social network and the location of the stubborn agents within it shape the behavior of the opinion dynamics. The expected belief vector is proved to evolve according to an ordinary differential equation coinciding with the Kolmogorov backward equation of a continuous-time Markov chain on the graph with absorbing states corresponding to the stubborn agents, and hence to converge to a harmonic vector, with every regular agent's value being the weighted average of its neighbors' values, and boundary conditions corresponding to the stubborn agents' beliefs. Expected cross-products of the agents' beliefs allow for a similar characterization in terms of coupled Markov chains on the graph describing the social network. The characterization of the expected stationary beliefs as harmonic functions is then used in order to find explicit solutions for some social networks with particular structure or symmetries.

Third, in what we consider the most novel contribution of our analysis, we study the behavior of the stationary beliefs in large-scale highly fluid social networks, defined as networks where the product between the fraction of links incoming into the stubborn agent set and the mixing time of the associated Markov chain is small. We show that in highly fluid social networks, the expected value and variance of the stationary beliefs of most of the agents concentrate around certain values as the population size grows large. We refer to this result as homogeneous influence of stubborn agents on the rest of the society, meaning that their influence on most of the agents in the society is approximately the same. The applicability of this result is then proved by providing several examples of large-scale random networks, including the Erdős–Rényi graph in the connected regime, power law networks, and small-world networks.
We wish to emphasize that homogeneous influence in highly fluid societies need not imply approximate consensus among the agents, whose beliefs may well fluctuate in an almost uncorrelated way. Ongoing work of the authors is aimed at a deeper understanding of this topic.

Our main contribution partly stems from novel applications of several techniques of applied probability to the study of opinion dynamics. In particular, convergence in law and ergodicity of the agents' beliefs are established by first rewriting the dynamics in the form of an iterated affine function system and then proving almost sure convergence of the time-reversed process [20]. On the other hand, our estimates of the behavior of the expected values and variances of the stationary beliefs in large-scale highly fluid networks are based on techniques from the theory of Markov chains and mixing times [4, 31], as well as on results in modern random graph theory [21].

In addition to the aforementioned works on learning and opinion dynamics, this paper is related to some of the literature in the statistical physics of social dynamics: see [11] and references therein for an overview of this research line. More specifically, our model is closely related to work by Mobilia and co-authors [36, 37, 38], who study a variation of the discrete opinion dynamics model, also called the voter model, with inhomogeneities, there referred to as zealots: such zealots are agents which tend to favor one opinion in [36, 37], or are in fact equivalent to our stubborn agents in [38]. These works generally present analytical results for some regular graphical structures (such as regular lattices [36, 37], or complete graphs [38]), complemented by numerical simulations. In contrast, we prove convergence in distribution and characterize the properties of the limiting distribution for general finite graphs.
Even though our model involves continuous belief dynamics, we will also show that the voter model with zealots of [38] can be recovered as a special case of our general framework.

Our work is also related to work on consensus and gossip algorithms, which is motivated by different problems, but typically leads to a similar mathematical formulation (Tsitsiklis [48], Tsitsiklis, Bertsekas and Athans [49], Jadbabaie, Lin and Morse [28], Olfati-Saber and Murray [42], Olshevsky and Tsitsiklis [43], Fagnani and Zampieri [24], Nedić and Ozdaglar [39]). In consensus problems, the focus is on whether the beliefs or the values held by different units (which might correspond to individuals, sensors, or distributed processors) converge to a common value. Our analysis here does not focus on limiting consensus of values, but, in contrast, characterizes the stationary fluctuations in values.
The rest of this paper is organized as follows. In Section 2, we introduce our model of interaction between the agents, describing the resulting evolution of individual beliefs, and we discuss two special cases, in which the arguments simplify particularly, and some fundamental features of the general case are highlighted. Section 3 presents convergence results on the evolution of agent beliefs over time, for a given social network: the beliefs are shown to converge in distribution, and to be an ergodic process, while in general they do not converge almost surely. Section 4 presents a characterization of the first and second moments of the stationary beliefs in terms of the hitting probabilities of two coupled Markov chains on the graph describing the social network. Section 5 presents explicit computations of the expected stationary beliefs and variances for some special network topologies. Section 6 provides bounds on the level of dispersion of the first two moments of the stationary beliefs: it is shown that, in highly fluid networks, most of the agents have almost the same stationary expected belief and variance. Section 7 presents some concluding remarks.
Basic Notation and Terminology
We will typically label the entries of vectors by elements of finite alphabets, rather than non-negative integers; hence R^I will stand for the set of vectors with entries labeled by elements of the finite alphabet I. An index denoted by a lower-case letter will implicitly be assumed to run over the finite alphabet denoted by the corresponding calligraphic upper-case letter (e.g., ∑_i will stand for ∑_{i∈I}). For any finite set J, we use the notation 1_J to denote the indicator function over the set J, i.e., 1_J(j) is equal to 1 if j ∈ J, and equal to 0 otherwise. For a matrix M ∈ R^{I×J}, M^T ∈ R^{J×I} will stand for its transpose, and ||M|| for its 2-norm. For a probability distribution µ over a finite set I, and a subset J ⊆ I, we will write µ(J) := ∑_{j∈J} µ_j. If ν is another probability distribution on I, we will use the notation ||µ − ν||_TV := (1/2) ∑_i |µ_i − ν_i| = sup{|µ(J) − ν(J)| : J ⊆ I} for the total variation distance between µ and ν. The probability law (or distribution) of a random variable Z will be denoted by L(Z). Continuous-time Markov chains on a finite set V will be characterized by their transition rate matrix M ∈ R^{V×V}, which has zero row sums, and whose non-diagonal elements are nonnegative and correspond to the rates at which the chain jumps from one state to another (see [41, Chs. 2-3]). If V(t) and V′(t) are Markov chains on V, defined on the same probability space, we will use the notation P_v(·), and P_{vv′}(·), for the conditional probability measures given the events V(0) = v, and, respectively, (V(0), V′(0)) = (v, v′). Similarly, for some probability distribution π over V (possibly the stationary one), P_π(·) := ∑_{v,v′} π_v π_{v′} P_{vv′}(·) will denote the conditional probability measure of the Markov chains with initial distribution π, while E_v[·], E_{v,v′}[·], and E_π[·] will denote the corresponding conditional expectations.
For two non-negative real-valued sequences {a_n : n ∈ N}, {b_n : n ∈ N}, we will write a_n = O(b_n) if, for some positive constant K, a_n ≤ K b_n for all sufficiently large n; a_n = Θ(b_n) if, in addition, b_n = O(a_n); and a_n = o(b_n) if lim_n a_n / b_n = 0.
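The two expressions given above for the total variation distance coincide on finite alphabets. The following minimal sketch (function names are ours, not the paper's; the 1/2 normalization is the standard one) checks the equality numerically:

```python
from itertools import chain, combinations

def tv_distance(mu, nu):
    """Total variation distance ||mu - nu||_TV = (1/2) * sum_i |mu_i - nu_i|,
    for distributions given as dicts over the same finite alphabet."""
    return 0.5 * sum(abs(mu[i] - nu[i]) for i in mu)

def tv_via_sup(mu, nu):
    """Equivalent sup formulation: sup over subsets J of |mu(J) - nu(J)|,
    computed here by brute force over all subsets (finite alphabets only)."""
    keys = list(mu)
    subsets = chain.from_iterable(combinations(keys, r) for r in range(len(keys) + 1))
    return max(abs(sum(mu[j] for j in J) - sum(nu[j] for j in J)) for J in subsets)
```

The sup is attained at the set J* = {i : mu_i > nu_i}, which is why the brute-force maximum agrees with the halved 1-norm.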
2. Belief evolution model
We consider a finite population V of interacting agents, of possibly very large size n := |V|. The connectivity among the agents is described by a simple directed graph −→G = (V, −→E), whose node set is identified with the agent population, and where −→E ⊆ V × V \ D, with D := {(v, v) : v ∈ V}, stands for the set of directed links among the agents. Notice that we do not allow for parallel links or loops. At time t ≥ 0, each agent v ∈ V holds a belief (or opinion) about an underlying state of the world, denoted by X_v(t) ∈ R. The full vector of beliefs at time t will be denoted by X(t) = {X_v(t) : v ∈ V}. We distinguish between two types of agents: regular and stubborn. Regular agents repeatedly update their own beliefs, based on the observation of the beliefs of their out-neighbors in −→G. Stubborn agents never change their opinions; in particular, they do not have any out-neighbors. We will denote the set of regular agents by A and the set of stubborn agents by S, so that the set of all agents is V = A ∪ S (see Figure 1).
Figure 1.
A social network with seven regular agents (colored in grey) and five stubborn agents (colored in white and black, respectively). The presence of a directed link (v, v′) indicates that agent v is influenced by the opinion of agent v′. Therefore, links are only incoming to the stubborn agents, while links between pairs of regular agents may be uni- or bi-directional.

More specifically, the agents' beliefs evolve according to the following stochastic update process. At time t = 0, each agent v ∈ V starts with an initial belief X_v(0). The beliefs of the stubborn agents stay constant in time: X_s(t) = X_s(0) =: x_s, for all s ∈ S and t ≥ 0. In contrast, the beliefs of the regular agents are updated as follows. To every directed link in −→E of the form (a, v), where necessarily a ∈ A and v ∈ V, a clock is associated, ticking at the times of an independent Poisson process of rate r_av > 0. If the (a, v)-th clock ticks at time t, agent a meets agent v and updates her belief to a convex combination of her own current belief and the current belief of agent v:

X_a(t) = (1 − θ_av) X_a(t−) + θ_av X_v(t−), (1)

where X_v(t−) stands for the left limit lim_{u↑t} X_v(u). Here, the scalar θ_av ∈ (0, 1] is a trust parameter that represents the confidence that the regular agent a ∈ A puts on agent v's belief. That r_av and θ_av are strictly positive for all (a, v) ∈ −→E is simply a convention (since, if r_av θ_av = 0, one can always consider the subgraph of −→G obtained by removing the link (a, v) from −→E). Similarly, we also adopt the convention that r_vv′ = θ_vv′ = 0 for all v, v′ ∈ V such that (v, v′) ∉ −→E (hence, including loops v′ = v). For every regular agent a ∈ A, let S_a ⊆ S be the subset of stubborn agents which are reachable from a by a directed path in −→G. We refer to S_a as the set of stubborn agents influencing a. For every stubborn agent s ∈ S, A_s := {a : s ∈ S_a} ⊆ A will stand for the set of regular agents influenced by s.

The tuple N = (−→G, {θ_e}, {r_e}) contains the entire information about patterns of interaction among the agents, and will be referred to as the social network. Together with an assignment of a probability law for the initial belief vector, L(X(0)), the social network designates a society. Throughout the paper, we make the following assumption regarding the underlying social network.

Assumption 1.
Every regular agent is influenced by some stubborn agent, i.e., S_a is non-empty for every a in A.

We have imposed that, at each meeting instance, only one agent updates her belief. The model can be easily extended to the case where both agents update their beliefs simultaneously, without significantly affecting any of our general results.
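The update process (1) can be sketched as a short event-driven simulation. This is our own minimal illustration, not the paper's code; the function name and the dictionary-based interface are assumptions:

```python
import random

def simulate_beliefs(rates, trust, x0, t_end, seed=0):
    """Simulate the gossip dynamics X_a(t) = (1 - theta_av) X_a(t-) + theta_av X_v(t-).

    rates: dict mapping each directed link (a, v) to its Poisson rate r_av > 0
    trust: dict mapping each directed link (a, v) to theta_av in (0, 1]
    x0:    dict mapping each agent to its initial belief; stubborn agents
           simply have no outgoing links in `rates`, so they never update.
    """
    rng = random.Random(seed)
    x = dict(x0)
    links = list(rates)
    weights = [rates[e] for e in links]
    total_rate = sum(weights)
    t = 0.0
    while True:
        # superposition of independent Poisson clocks: the next tick is
        # exponential with the total rate, and the ticking link is drawn
        # with probability proportional to its own rate
        t += rng.expovariate(total_rate)
        if t > t_end:
            return x
        (a, v), = rng.choices(links, weights=weights, k=1)
        th = trust[(a, v)]
        x[a] = (1.0 - th) * x[a] + th * x[v]
```

Run on a society with one regular agent between two stubborn agents holding beliefs 0 and 1, the returned belief always stays in [0, 1] while the stubborn entries never move.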
Assumption 1 may be easily removed. If there are some regular agents which are not influenced by any stubborn agent, then there is no link in E connecting the set R of such regular agents to V \ R. Then, one may decompose the subgraph obtained by restricting G to R into its communicating classes, and apply the results in [24] (see Example 3.5 therein), showing that, with probability one, a consensus on a random belief is achieved on every such communicating class.

We denote the total meeting rate of agent v ∈ V by r_v, i.e., r_v := ∑_{v′} r_vv′, and the total meeting rate of all agents by r, i.e., r := ∑_v r_v. We use N(t) to denote the total number of agent meetings (or link activations) up to time t ≥ 0, which is simply a Poisson arrival process of rate r. We also use the notation T(k) to denote the time of the k-th belief update, i.e., T(k) := inf{t ≥ 0 : N(t) ≥ k}.

To a given social network, we associate the matrix Q ∈ R^{V×V}, with entries

Q_vw := θ_vw r_vw, v ≠ w ∈ V, Q_vv := − ∑_{v′≠v} Q_vv′. (2)

In the rest of the paper, we will often consider a continuous-time Markov chain V(t) on V with transition rate matrix Q.

The following example describes the canonical construction of a social network from an undirected graph, and will be used often in the rest of the paper. Example 1.
Let G = (V, E) be a connected multigraph (here E is a multi-set of unordered pairs of elements of V, which allows for the possibility of parallel links and loops), and let S ⊆ V, A = V \ S. Define the directed graph −→G = (V, −→E), where (a, v) ∈ −→E if and only if a ∈ A, v ∈ V \ {a}, and {a, v} ∈ E; i.e., −→G is the directed graph obtained by making all links in E bidirectional, except links between a regular and a stubborn agent, which are unidirectional (pointing from the regular agent to the stubborn agent). For v, w ∈ V, let κ_v,w denote the multiplicity of the link {v, w} in E (each self-loop contributing as 2), and let d_v = ∑_w κ_v,w be the degree of node v in G. (In particular, κ_a,v = 1_E({a, v}) if G is a simple graph, i.e., if it contains neither loops nor parallel links.) Let the trust parameter be constant, i.e., θ_av = θ ∈ (0, 1] for all (a, v) ∈ −→E, and define

r_av = d_a^{-1} κ_a,v 1_{V\{a}}(v), a ∈ A, v ∈ V. (3)

This concludes the construction of the social network N = (−→G, {θ_e}, {r_e}). In particular, one has Q_av = θ κ_a,v / d_a for all (a, v) ∈ −→E. Observe that connectedness of G implies that Assumption 1 holds. Finally, notice that nothing prevents the multigraph G from having (possibly parallel) links between two nodes both in S. However, such links do not have any correspondence in the directed graph −→G, and in fact they are irrelevant for the belief dynamics, since stubborn agents do not update their beliefs.

We conclude this section by discussing in some detail two special cases whose simple structure sheds light on the main features of the general model. In particular, we consider a social network with a single regular agent, and a social network where the trust parameter satisfies θ_av = 1 for all a ∈ A and v ∈ V. We show that in both of these cases agent beliefs fail to converge almost surely.

Consider a society consisting of a single regular agent, i.e., A = {a}, and two stubborn agents, S = {s0, s1} (see Fig. 2(a)). Assume that r_as0 = r_as1 = 1/2, θ_as0 = θ_as1 = 1/2, x_s0 = 0, x_s1 = 1, and X_a(0) = 0. Then one has, for all t ≥ 0,

X_a(t) = ∑_{1 ≤ k ≤ N(t)} 2^{k − N(t) − 1} B(k),
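The construction of Example 1 and the matrix Q of (2) can be sketched concretely. This is our own illustration (function names are assumptions, and it restricts to a simple graph for brevity):

```python
def example1_rates(edges, stubborn, theta):
    """Rates (3) of Example 1 for a simple connected graph G = (V, E):
    r_av = kappa_{a,v} / d_a, with kappa_{a,v} in {0, 1} and d_a the
    degree of a; stubborn agents get no outgoing links."""
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    rates, trust = {}, {}
    for u, v in edges:
        for a, b in ((u, v), (v, u)):
            if a not in stubborn:
                rates[(a, b)] = 1.0 / deg[a]
                trust[(a, b)] = theta
    return rates, trust

def rate_matrix(agents, rates, trust):
    """Q from (2): Q_vw = theta_vw * r_vw for v != w, and the diagonal
    entry makes each row sum to zero. Stubborn rows come out identically
    zero, i.e., they are absorbing states of the associated chain."""
    idx = {v: i for i, v in enumerate(agents)}
    n = len(agents)
    Q = [[0.0] * n for _ in range(n)]
    for (v, w), r in rates.items():
        Q[idx[v]][idx[w]] = trust[(v, w)] * r
    for i in range(n):
        Q[i][i] = -sum(Q[i][j] for j in range(n) if j != i)
    return Q
```

On a line graph with stubborn endpoints, this reproduces Q_av = θ κ_a,v / d_a as stated in Example 1.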
Figure 2.
Typical sample-path behavior of the belief of the regular agent in the simple social network topology depicted in (a). In (b), the actual belief process X_a(t), fluctuating ergodically on the interval [0, 1]; in (c), the time-reversed process, converging almost surely to the stationary belief X_a.

where N(t) is the total number of agent meetings up to time t (or number of arrivals up to time t of a rate-1 Poisson process), and {B(k) : k ∈ N} is a sequence of Bernoulli(1/2) random variables, independent mutually and from the process N(t). Observe that, almost surely, arbitrarily long strings of contiguous zeros and ones appear in the sequence {B(k)}, while the number of meetings N(t) grows unbounded. It follows that, with probability one,

lim inf_{t→∞} X_a(t) = 0, lim sup_{t→∞} X_a(t) = 1,

so that the belief X_a(t) does not converge almost surely. On the other hand, observe that, since ∑_{k>n} 2^{-k} |B(k)| ≤ 2^{-n}, the series X_a := ∑_{k≥1} 2^{-k} B(k) is sample-wise converging. It follows that, as t grows large, the time-reversed process

←X_a(t) := ∑_{1 ≤ k ≤ N(t)} 2^{-k} B(k)

converges to X_a, with probability one, and, a fortiori, in distribution. Notice that, for every positive integer k, the binary k-tuples {B(1), …, B(k)} and {B(k), …, B(1)} are uniformly distributed over {0, 1}^k, and independent from the Poisson arrival process N(t). It follows that, for all t ≥ 0, ←X_a(t) has the same distribution as X_a(t). Therefore, X_a(t) converges in distribution to X_a as t grows large. Moreover, it is a standard fact (see, e.g., [45, pag. 92]) that X_a is uniformly distributed over the interval [0, 1]; hence, X_a(t) is asymptotically uniform on [0, 1].

Consider now a general common trust parameter θ_as0 = θ_as1 = θ ∈ (0, 1]. Then

X_a(t) = θ ∑_{1 ≤ k ≤ N(t)} (1 − θ)^{N(t)−k} B(k)

converges in law to the stationary belief

X_a := θ (1 − θ)^{-1} ∑_{k≥1} (1 − θ)^k B(k). (4)

As explained in [20, Section 2.6], for every value of θ in (1/2, 1), the probability law of X_a is singular, and in fact supported on a Cantor set. In contrast, for almost all values of θ ∈ (0, 1/2), the probability law of X_a is absolutely continuous with respect to Lebesgue's measure. (See [44]; in fact, explicit counterexamples of values of θ ∈ (0, 1/2) for which the asymptotic measure is singular are known: for example, Erdős [22, 23] showed that, if θ = (3 − √5)/2, then the probability law of X_a is singular.) In the extreme case θ = 1, it is not hard to see that X_a(t) = B(N(t)) converges in distribution to a random variable X_a with Bernoulli(1/2) distribution. On the other hand, observe that, regardless of the fine structure of the probability law of the stationary belief X_a, i.e., of whether it is absolutely continuous or singular, its moments can be characterized for all values of θ ∈ (0, 1]. The expected value of X_a is given by

E[X_a] = θ (1 − θ)^{-1} ∑_{k≥1} (1 − θ)^k E[B(k)] = θ ∑_{k≥1} (1 − θ)^{k−1} · 1/2 = 1/2,

and, using the mutual independence of the B(k)'s, the variance of X_a is given by

Var[X_a] = θ^2 (1 − θ)^{-2} ∑_{k≥1} (1 − θ)^{2k} Var[B(k)] = θ^2 ∑_{k≥1} (1 − θ)^{2(k−1)} · 1/4 = θ / (4(2 − θ)).

Observe that the expected value of X_a is independent of θ, while its variance increases from 0 to a maximum of 1/4 as θ is increased from 0 to 1.

Figure 3. Duality between the voter model with zealots and the coalescing Markov chains process with absorbing states. The network topology is a line with five regular agents and two stubborn agents placed at the two extremities. The time index for the opinion dynamics, t, runs from left to right, whereas the time index for the coalescing Markov chains process, u, runs from right to left. Both dotted and solid arrows represent meeting instances. Fixing a time horizon t > 0, in order to trace the beliefs X(t), one has to follow coalescing Markov chains starting at u = 0 in the different nodes of the network, and jumping from one state to another in correspondence to the solid arrows. The particles are represented by bullets at the times of their jumps. Clusters of coalesced particles are represented by bullets of increasing size.

We now consider the special case when the social network topology −→G is arbitrary, and θ_av = 1 for all (a, v) ∈ −→E. In this case, whenever a link (a, v) ∈ −→E is activated, the regular agent a adopts agent v's current opinion as such, completely disregarding her own current opinion. This opinion dynamics, known as the voter model, was introduced independently by Clifford and Sudbury [12], and Holley and Liggett [27]. It has been extensively studied in the framework of interacting particle systems [32, 33]. While most of the research focus has been on the case when the graph is an infinite lattice, the voter model on finite graphs, and without stubborn agents, was
considered, e.g., in [15, 17], [4, Ch. 14], and [21, Ch. 6.9]: in this case, consensus is achieved in some finite random time, whose distribution depends on the graph topology only. In some recent work [38], a variant with one or more stubborn agents (there referred to as zealots) has been proposed and analyzed on the complete graph. We wish to emphasize that such a voter model with zealots can be recovered as a special case of our model, and hence our general results, to be proven in the next sections, apply to it as well. However, we briefly discuss this special case here, since the proofs are much more intuitive, and allow one to anticipate some of the general results.

The main tool in the analysis of the voter model is the dual process, which runs backward in time and allows one to identify the source of the opinion of each agent at any time instant. Specifically, let us focus on the belief of a regular agent a at time t > 0. Then, in order to trace X_a(t), one has to look at the last meeting instance of agent a that occurred no later than time t. If such a meeting instance occurred at some time t − U_1 ∈ [0, t] and the agent met was v ∈ V, then the belief of agent a at time t coincides with that of agent v at time t − U_1, i.e., X_a(t) = X_v(t − U_1). The next step is to look at the last meeting instance of agent v occurred no later than time t − U_1; if such an instance occurred at time t − U_2 ∈ [0, t − U_1], and the agent met was w, then X_a(t) = X_v(t − U_1) = X_w(t − U_2). Clearly, one can iterate this argument, going backward in time, until reaching time 0. In this way, one implicitly defines a continuous-time Markov chain V_a(u) with state space V, which starts at V_a(0) = a and stays put there until time U_1, when it jumps to node v and stays put there in the time interval [U_1, U_2), then jumps at time U_2 to node w, and so on. It is not hard to see that, thanks to the fact that the meeting instances are independent Poisson processes, the Markov chain V_a(u) has transition rate matrix Q. In particular, it halts when it hits some state s ∈ S. This shows that L(X_a(t)) = L(X_{V_a(t)}(0)). More generally, if one is interested in the joint probability distribution of the belief vector X(t), then one needs to consider n − |S| continuous-time Markov chains {V_a(u) : a ∈ A}, each one starting from a different node in A (specifically, V_a(0) = a for all a ∈ A), and run simultaneously on V (see Figure 3). These Markov chains move independently with transition rate matrix Q, until the first time when they either meet, or hit the set S: in the former case, they stick together and continue moving on V as a single particle, with transition rate matrix Q; in the latter case, they halt. This process is known as the coalescing Markov chains process with absorbing set S. Then, one gets that

L({X_a(t) : a ∈ A}) = L({X_{V_a(t)}(0) : a ∈ A}). (5)

Equation (5) establishes a duality between the voter model with zealots and the coalescing Markov chains process with absorbing states. In particular, Assumption 1 implies that, with probability one, each V_a(u) will hit the set S in some finite random time T_aS, so that in particular the vector {V_a(u) : a ∈ A} converges in distribution, as u grows large, to an S^A-valued random vector {V_a(T_aS) : a ∈ A}. It then follows from (5) that X(t) converges in distribution to a stationary belief vector X whose entries are given by X_s = x_s for every stubborn agent s ∈ S, and X_a = x_{V_a(T_aS)} for every regular agent a ∈ A.
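Under the duality (5), the marginal distribution of each stationary belief is determined by the probability that the dual chain V_a is absorbed at each stubborn agent. A minimal sketch of computing these absorption probabilities (the function name and the fixed-point iteration are ours; a direct linear solve would do equally well):

```python
def hitting_probabilities(Q, agents, stubborn, target, sweeps=500):
    """h_v = probability that the dual chain started at v, with rate
    matrix Q, is absorbed at `target` rather than at another stubborn
    agent. Regular entries satisfy the harmonic condition
    sum_w Q_vw h_w = 0, with boundary values h_s = 1 if s == target
    else 0; solved here by simple fixed-point sweeps.
    Q is a nested list indexed consistently with `agents`."""
    idx = {v: i for i, v in enumerate(agents)}
    n = len(agents)
    h = [1.0 if v == target else 0.0 for v in agents]
    regular = [v for v in agents if v not in stubborn]
    for _ in range(sweeps):
        for v in regular:
            i = idx[v]
            h[i] = sum(Q[i][j] * h[j] for j in range(n) if j != i) / (-Q[i][i])
    return {v: h[idx[v]] for v in agents}
```

For the line of Figure 4 (stubborn agents 0 and 3), the probabilities of absorption at agent 3 are 1/3 and 2/3 for the two regular agents, matching the harmonic (linear) profile one expects on a path.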
3. Convergence in distribution and ergodicity of the beliefs
This section is devoted to studying the convergence properties of the random belief vector $X(t)$ for the general update model described in Section 2. Figure 4 reports the typical sample-path behavior of the agents' beliefs for a simple social network with population size $n = 4$ and line graph topology, in which the two stubborn agents are positioned at the extremes and hold beliefs $x_0 < x_3$. As shown in Fig. 4(b), the beliefs of the two regular agents, $X_1(t)$ and $X_2(t)$, fluctuate persistently in the interval $[x_0, x_3]$. On the other hand, the time averages of the two regular agents' beliefs rapidly approach the limit values $\frac{2}{3}x_0 + \frac{1}{3}x_3$ and $\frac{1}{3}x_0 + \frac{2}{3}x_3$, respectively.
Figure 4.
Typical sample-path behavior of the beliefs, and their ergodic averages, for a social network with population size $n = 4$. The topology is a line graph, displayed in (a). The stubborn agents correspond to the two extremes of the line, $\mathcal{S} = \{0, 3\}$, and their constant opinions are $x_0 = 0$ and $x_3 = 1$. The regular agent set is $\mathcal{A} = \{1, 2\}$. The confidence parameters and the interaction rates are chosen to be $\theta_{av} = 1/2$ and $r_{av} = 1/3$, for all $a = 1, 2$ and $v = a \pm 1$. In picture (b), the trajectories of the actual beliefs $X_v(t)$, for $v = 0, 1, 2, 3$, are reported, whereas picture (c) reports the trajectories of their ergodic averages $Z_v(t) := t^{-1}\int_0^t X_v(u)\,\mathrm{d}u$.

As Figure 4 suggests, the regular agents' beliefs fail to converge almost surely: we have seen this in the special cases of Section 2.1, while a general result in this sense will be stated as Theorem 2. On the other hand, we will prove that, regardless of the initial regular agents' beliefs, the belief vector $X(t)$ converges in distribution to a random stationary belief vector $X$ (see Theorem 1), and in fact it is an ergodic process (see Corollary 1). In order to prove Theorem 1, we will rewrite $X(t)$ in the form of an iterated affine function system [20]. Then, we will consider the so-called time-reversed belief process. This is a stochastic process whose marginal probability distribution, at any time $t \geq 0$, coincides with the one of the actual belief process, $X(t)$. In contrast to $X(t)$, the time-reversed belief process is in general not Markov, whereas it can be shown to converge to a random stationary belief vector with probability one. From this, we recover convergence in distribution of the actual belief vector $X(t)$.

Formally, for any time instant $t \geq 0$, let us introduce the projected belief vector $Y(t) \in \mathbb{R}^{\mathcal{A}}$, where $Y_a(t) = X_a(t)$ for all $a \in \mathcal{A}$. Let $I_{\mathcal{A}} \in \mathbb{R}^{\mathcal{A}\times\mathcal{A}}$ be the identity matrix, and, for $a \in \mathcal{A}$, let $e_{(a)} \in \mathbb{R}^{\mathcal{A}}$ be the vector whose entries are all zero, but for the $a$-th, which equals 1. For every positive integer $k$, consider the random matrix $A(k) \in \mathbb{R}^{\mathcal{A}\times\mathcal{A}}$ and the random vector $B(k) \in \mathbb{R}^{\mathcal{A}}$ defined by
$$A(k) = I_{\mathcal{A}} + \theta_{aa'}\left(e_{(a)}e_{(a')}^T - e_{(a)}e_{(a)}^T\right), \qquad B(k) = 0,$$
if the $k$-th activated link is $(a, a') \in \overrightarrow{\mathcal{E}}$, with $a, a' \in \mathcal{A}$, and
$$A(k) = I_{\mathcal{A}} - \theta_{as}\,e_{(a)}e_{(a)}^T, \qquad B(k) = e_{(a)}\theta_{as}x_s,$$
if the $k$-th activated link is $(a, s) \in \overrightarrow{\mathcal{E}}$, with $a \in \mathcal{A}$ and $s \in \mathcal{S}$. Define the matrix product
$$\overrightarrow{A}(k, l) := A(l)A(l-1)\cdots A(k+1)A(k), \qquad 1 \leq k \leq l, \qquad (6)$$
with the convention that $\overrightarrow{A}(k, l) = I_{\mathcal{A}}$ for $k > l$. Then, at the time $T_{(l)}$ of the $l$-th belief update, one has
$$Y(T_{(l)}) = A(l)Y(T_{(l)}^-) + B(l) = A(l)Y(T_{(l-1)}) + B(l), \qquad l \geq 1,$$
so that, for all $t \geq 0$,
$$Y(t) = \overrightarrow{A}(1, N(t))Y(0) + \sum_{1 \leq k \leq N(t)} \overrightarrow{A}(k+1, N(t))B(k), \qquad (7)$$
where we recall that $N(t)$ is the total number of agents' meetings up to time $t$. Now, define the time-reversed belief process
$$\overleftarrow{Y}(t) := \overleftarrow{A}(1, N(t))Y(0) + \sum_{1 \leq k \leq N(t)} \overleftarrow{A}(1, k-1)B(k), \qquad (8)$$
where $\overleftarrow{A}(k, l) := A(k)A(k+1)\cdots A(l-1)A(l)$ for $k \leq l$, with the convention that $\overleftarrow{A}(k, l) = I_{\mathcal{A}}$ for $k > l$. The following is a fundamental observation (cf. [20]): Lemma 1.
For all $t \geq 0$, $Y(t)$ and $\overleftarrow{Y}(t)$ have the same probability distribution.

Proof. Notice that $\{(A(k), B(k)) : k \in \mathbb{N}\}$ is a sequence of independent and identically distributed random variables, independent from the process $N(t)$. This, in particular, implies that the $l$-tuple $\{(A(k), B(k)) : 1 \leq k \leq l\}$ has the same distribution as the $l$-tuple $\{(A(l-k+1), B(l-k+1)) : 1 \leq k \leq l\}$, for all $l \in \mathbb{N}$. From this, and the identities (7) and (8), it follows that the belief vector $Y(t)$ has the same distribution as $\overleftarrow{Y}(t)$, for all $t \geq 0$.

We next show that, in contrast to $Y(t)$, the time-reversed belief process $\overleftarrow{Y}(t)$ converges almost surely. Lemma 2.
Let Assumption 1 hold. Then, for every value of the stubborn agents' beliefs $\{x_s\} \in \mathbb{R}^{\mathcal{S}}$, there exists an $\mathbb{R}^{\mathcal{A}}$-valued random variable $Y$ such that
$$\mathbb{P}\left(\lim_{t\to\infty} \overleftarrow{Y}(t) = Y\right) = 1,$$
for every initial distribution $\mathcal{L}(Y(0))$ of the regular agents' beliefs.

Proof. Observe that the expected entries of $A(k)$ and $B(k)$ are given by
$$\mathbb{E}[A_{aa'}(k)] = \frac{Q_{aa'}}{r}, \qquad \mathbb{E}[A_{aa}(k)] = 1 + \frac{Q_{aa}}{r}, \qquad \mathbb{E}[B_a(k)] = \frac{1}{r}\sum_s Q_{as}x_s,$$
for all $a \neq a' \in \mathcal{A}$. In particular, $\mathbb{E}[A(k)]$ is a substochastic matrix. It follows from Perron-Frobenius theory that the spectrum of $\mathbb{E}[A(k)]$ is contained in the disk centered at 0 of radius $\rho$, where $\rho \in [0, +\infty)$ is its largest-in-modulus eigenvalue, with corresponding left eigenvector $y$ with nonnegative entries. Moreover, Assumption 1 implies that, for all nonempty subsets $\mathcal{J} \subseteq \mathcal{A}$, there exist some $j \in \mathcal{J}$ and $v \in \mathcal{V}\setminus\mathcal{J}$ such that $(j, v) \in \overrightarrow{\mathcal{E}}$ (otherwise $\mathcal{S}_j = \emptyset$ for all $j \in \mathcal{J}$). Therefore $\sum_a \mathbb{E}[A_{ja}] \leq 1 - r^{-1}Q_{jv} < 1$. Choosing $\mathcal{J}$ as the support of the eigenvector $y$ gives $\rho\sum_a y_a = \sum_a (\mathbb{E}[A(k)]^T y)_a = \sum_j y_j \sum_a \mathbb{E}[A_{ja}(k)] < \sum_j y_j$, so that $\rho < 1$. Then, using the Jordan canonical decomposition, one can show that
$$\left\|\mathbb{E}\left[\overleftarrow{A}(1, k)\right]\right\|_\infty \leq Ck^{n-1}\rho^k, \qquad \forall k \geq 1,$$
where $C$ is a constant depending on $\mathbb{E}[A(1)]$ only. Upon observing that $\overleftarrow{A}(1, k)$ has non-negative entries, and using the inequality $\mathbb{E}[\max\{Z, W\}] \leq \mathbb{E}[Z] + \mathbb{E}[W]$, valid for all nonnegative-valued random variables $Z$ and $W$, one gets that
$$\mathbb{E}\left[\left\|\overleftarrow{A}(1, k)\right\|\right] = \mathbb{E}\left[\max_{a'}\sum_a \overleftarrow{A}_{aa'}(1, k)\right] \leq \sum_{a,a'}\mathbb{E}\left[\overleftarrow{A}_{aa'}(1, k)\right] \leq n\left\|\mathbb{E}\left[\overleftarrow{A}(1, k)\right]\right\|_\infty \leq Cnk^{n-1}\rho^k. \qquad (9)$$
Now, fix some $\upsilon \in (\rho, 1)$. From the independence of $\overleftarrow{A}(1, k-1)$ and $B(k)$ it follows that, for all $k \geq 1$,
$$\mathbb{P}\left(\left\|\overleftarrow{A}(1, k-1)B(k)\right\| \geq \upsilon^{k-1}\right) \leq \upsilon^{-k+1}\mathbb{E}\left[\|\overleftarrow{A}(1, k-1)B(k)\|\right] \leq \upsilon^{-k+1}\mathbb{E}\left[\|\overleftarrow{A}(1, k-1)\|\,\|B(k)\|\right] = \upsilon^{-k+1}\mathbb{E}\left[\|\overleftarrow{A}(1, k-1)\|\right]\mathbb{E}\left[\|B(k)\|\right] \leq \beta_{k-1},$$
where $\beta_k := Cnk^{n-1}(\rho/\upsilon)^k\,\mathbb{E}[\|B(1)\|]$. Since $\sum_{k\geq 1}\beta_k < \infty$, the above bound and the Borel-Cantelli lemma imply that, with probability one, $\|\overleftarrow{A}(1, k-1)B(k)\| < \upsilon^{k-1}$ for all but finitely many values of $k \geq$
1. Hence, almost surely, the series
$$Y := \sum_{k\geq 1} \overleftarrow{A}(1, k-1)B(k)$$
is absolutely convergent. An analogous argument shows almost sure convergence of $\overleftarrow{A}(1, k)Y(0)$ to 0, as $k$ grows large. Since, with probability one, $N(t)$ goes to infinity as $t$ grows large, one has that
$$\lim_{t\to\infty}\overleftarrow{Y}(t) = \lim_{t\to\infty}\overleftarrow{A}(1, N(t))Y(0) + \lim_{t\to\infty}\sum_{1\leq j\leq N(t)}\overleftarrow{A}(1, j-1)B(j) = Y,$$
with probability one. This completes the proof.

Lemma 1 and Lemma 2 allow one to prove convergence in distribution of $X(t)$ to a random belief vector $X$, as stated in the following result. Theorem 1.
Let Assumption 1 hold. Then, for every value of the stubborn agents' beliefs $\{x_s\} \in \mathbb{R}^{\mathcal{S}}$, there exists an $\mathbb{R}^{\mathcal{V}}$-valued random variable $X$ such that, for every initial distribution $\mathcal{L}(X(0))$ satisfying $\mathbb{P}(X_s(0) = x_s) = 1$ for every $s \in \mathcal{S}$,
$$\lim_{t\to\infty}\mathbb{E}[\varphi(X(t))] = \mathbb{E}[\varphi(X)],$$
for all bounded and continuous test functions $\varphi : \mathbb{R}^{\mathcal{V}} \to \mathbb{R}$. Moreover, the probability law of the stationary belief vector $X$ is invariant for the system, i.e., if $\mathcal{L}(X(0)) = \mathcal{L}(X)$, then $\mathcal{L}(X(t)) = \mathcal{L}(X)$ for all $t \geq 0$.

Proof. It follows from Lemma 2 that $\overleftarrow{Y}(t)$ converges to $Y$ with probability one, and a fortiori in distribution. By Lemma 1, $\overleftarrow{Y}(t)$ and $Y(t)$ are identically distributed. Therefore, $Y(t)$ converges to $Y$ in distribution, and the first part of the claim follows by defining $X_a = Y_a$ for all $a \in \mathcal{A}$, and $X_s = x_s$ for all $s \in \mathcal{S}$. For the second part of the claim, it is sufficient to observe that the distribution of $Y = \sum_{k\geq 1}\overleftarrow{A}(1, k-1)B(k)$ is the same as that of $Y' := A(0)Y + B(0)$, where $A(0)$ and $B(0)$ are independent copies of $A(1)$ and $B(1)$, respectively.

Motivated by Theorem 1, for any agent $v \in \mathcal{V}$, we refer to the random variable $X_v$ as the stationary belief of agent $v$. Using standard ergodic theorems for Markov processes, an immediate implication of Theorem 1 is the following corollary, which shows that time averages of continuous functions of agent beliefs with bounded expectation are given by their expectation over the limiting distribution. Choosing the relevant function properly, this enables us to express the empirical averages of, and correlations across, agent beliefs in terms of expectations over the limiting distribution, highlighting the ergodicity of agent beliefs.
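The proof strategy above can be made concrete numerically. The sketch below (Python; the line graph of Figure 4 is assumed, with uniformly activated links and a common confidence parameter $\theta = 1/2$, so all constants are illustrative) draws samples of the almost-sure limit $Y = \sum_{k\geq 1}\overleftarrow{A}(1, k-1)B(k)$ of Lemma 2 by truncating the backward series.

```python
import random

theta = 0.5
links = [(1, 0), (1, 2), (2, 1), (2, 3)]   # directed links (regular agent, met agent)
x_stub = {0: 0.0, 3: 1.0}                  # stubborn agents' beliefs

def random_affine(rng):
    """Draw one pair (A(k), B(k)) as a 2x2 matrix / 2-vector over A = {1, 2}."""
    a, v = rng.choice(links)
    i = a - 1                              # row index of the updating agent
    A = [[1.0, 0.0], [0.0, 1.0]]
    B = [0.0, 0.0]
    A[i][i] -= theta
    if v in x_stub:
        B[i] = theta * x_stub[v]           # link (a, s): affine offset
    else:
        A[i][v - 1] += theta               # link (a, a'): convex mixing
    return A, B

def backward_limit(rng, steps=200):
    """Truncation of Y = sum_k A(1)...A(k-1) B(k), the limit in Lemma 2."""
    prod = [[1.0, 0.0], [0.0, 1.0]]        # running product A(1)...A(k-1)
    Y = [0.0, 0.0]
    for _ in range(steps):
        A, B = random_affine(rng)
        Y = [Y[r] + sum(prod[r][c] * B[c] for c in range(2)) for r in range(2)]
        prod = [[sum(prod[r][k] * A[k][c] for k in range(2)) for c in range(2)]
                for r in range(2)]         # prod <- prod A(k): multiply on the right
    return Y

rng = random.Random(1)
ys = [backward_limit(rng) for _ in range(500)]
avg = [sum(y[i] for y in ys) / len(ys) for i in range(2)]
```

Each sample lies in $[0, 1]^2$ and varies from draw to draw; only its law is fixed, with expectations $1/3$ and $2/3$ for this geometry.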
Corollary 1.
Let Assumption 1 hold. Then, for every value of the stubborn agents' beliefs $\{x_s\} \in \mathbb{R}^{\mathcal{S}}$, with probability one,
$$\lim_{t\to\infty}\frac{1}{t}\int_0^t \varphi(X(u))\,\mathrm{d}u = \mathbb{E}[\varphi(X)],$$
where $X$ is the stationary belief vector and $\varphi : \mathbb{R}^{\mathcal{V}} \to \mathbb{R}$ is any continuous test function such that $\mathbb{E}[\varphi(X)]$ exists and is finite.

Proof. Let $Y(t)$ and $Y$ be the projections of the belief vector at time $t \geq 0$, and of the stationary belief vector $X$, respectively, to the regular agent set $\mathcal{A}$. Let $\tilde{Y}(0)$ be an $\mathbb{R}^{\mathcal{A}}$-valued random vector, independent from $Y(0)$ and such that $\mathcal{L}(\tilde{Y}(0)) = \mathcal{L}(Y)$. Let $Y(t)$ be as in (7), and
$$\tilde{Y}(t) = \overrightarrow{A}(1, N(t))\tilde{Y}(0) + \sum_{1\leq k\leq N(t)}\overrightarrow{A}(k+1, N(t))B(k),$$
where $N(t)$ is the total number of agents' meetings up to time $t$, and $\overrightarrow{A}(k, l)$ is defined as in (6). Then, $\tilde{Y}(t) - Y(t) = \overrightarrow{A}(1, N(t))(\tilde{Y}(0) - Y(0))$. Arguing as in the proof of Lemma 2, one shows that $\lim_{t\to\infty}\|\tilde{Y}(t) - Y(t)\| = 0$, with probability one. Now, for $t > 0$, let the vectors $\tilde{X}(t)$ and $X(t)$ be defined by $\tilde{X}_a(t) = \tilde{Y}_a(t)$, $X_a(t) = Y_a(t)$ for $a \in \mathcal{A}$, and $\tilde{X}_s(t) = X_s(t) = x_s$ for $s \in \mathcal{S}$, and observe that, with probability one, $\sup_{t\geq 0}|X(t)| \leq \max_v|X_v(0)| < \infty$ and $\sup_{t\geq 0}|\tilde{X}(t)| \leq \max_v|X_v| < \infty$. Then, for every continuous $\varphi : \mathbb{R}^{\mathcal{V}} \to \mathbb{R}$, one has that $\lim_{t\to\infty}|\varphi(\tilde{X}(t)) - \varphi(X(t))| = 0$, with probability one. On the other hand, stationarity of the process $\tilde{X}(t)$ allows one to apply the ergodic theorem (see, e.g., [47, Theorem 6.2.12]), showing that, if $\mathbb{E}[\varphi(X)]$ exists and is finite, then
$$\lim_{t\to\infty}\frac{1}{t}\int_0^t \varphi(\tilde{X}(s))\,\mathrm{d}s = \mathbb{E}[\varphi(X)],$$
with probability one. Then, for any continuous $\varphi$ such that $\mathbb{E}[\varphi(X)]$ exists and is finite, one has that
$$\left|\frac{1}{t}\int_0^t \varphi(X(s))\,\mathrm{d}s - \mathbb{E}[\varphi(X)]\right| \leq \left|\frac{1}{t}\int_0^t \varphi(\tilde{X}(s))\,\mathrm{d}s - \mathbb{E}[\varphi(X)]\right| + \frac{1}{t}\int_0^t\left|\varphi(X(s)) - \varphi(\tilde{X}(s))\right|\mathrm{d}s \xrightarrow{t\to\infty} 0,$$
with probability one.

Theorem 1 and Corollary 1, respectively, show that the beliefs of all the agents converge in distribution, and that their empirical distributions converge almost surely, to a random stationary belief vector $X$. In contrast, the following theorem shows that the stationary belief of a regular agent which is connected to at least two stubborn agents with different beliefs is a non-degenerate random variable. As a consequence, the belief of every such regular agent keeps on fluctuating with probability one. Moreover, the theorem shows that the difference between the belief of a regular agent influenced by at least two stubborn agents with different beliefs, and the belief of any other agent, does not converge to zero with probability one, so that disagreement between them persists in time. For $a \in \mathcal{A}$, let $\mathcal{X}_a = \{x_s : s \in \mathcal{S}_a\}$ denote the set of stubborn agents' belief values influencing agent $a$. Theorem 2.
Let Assumption 1 hold, and let $a \in \mathcal{A}$ be such that $|\mathcal{X}_a| \geq 2$. Then, the stationary belief $X_a$ is a non-degenerate random variable. Furthermore, $\mathbb{P}(X_a \neq X_v) > 0$ for all $v \in \mathcal{V}\setminus\{a\}$.
Proof.
With no loss of generality, since the distribution of the stationary belief vector $X$ does not depend on the probability law of the initial beliefs of the regular agents, we can assume that such a law is the stationary one, i.e., that $\mathcal{L}(X(0)) = \mathcal{L}(X)$. Then, Theorem 1 implies that $\mathcal{L}(X(t)) = \mathcal{L}(X)$ for all $t \geq 0$.

Let $a \in \mathcal{A}$ be such that $X_a$ is degenerate. Then, almost surely, $X_a(t) = x_a$ for almost all $t$, for some constant $x_a$. Then, as we will show below, all the out-neighbors of $a$ will have their beliefs constantly equal to $x_a$ with probability one. Iterating the argument until reaching the set $\mathcal{S}_a$, one eventually finds that $x_s = x_a$ for all $s \in \mathcal{S}_a$, so that $|\mathcal{X}_a| = 1$. This proves the first part of the claim. For the second part, assume that $X_a = X_{a'}$ almost surely for some $a \neq a'$. Then, one can prove that, with probability one, every out-neighbor of $a$ or $a'$ agrees with $a$ or $a'$ at any time. Iterating the argument until reaching the set $\mathcal{S}_a \cup \mathcal{S}_{a'}$, one eventually finds that $|\mathcal{X}_a \cup \mathcal{X}_{a'}| = 1$.

One can reason as follows in order to see that, if $v$ is an out-neighbor of $a$, and $X_a = x_a$ is degenerate, then $X_v(t) = x_a$ for all $t$. Let $T_{av(k)}$ be the $k$-th activation of the link $(a, v)$. Then, Equation (1) implies that
$$X_v(T_{av(k)}) = x_a, \qquad \forall k \geq 1. \qquad (10)$$
Now, define $T^* := \inf\{t \geq 0 : X_v(t) \neq x_a\}$, and assume by contradiction that $\mathbb{P}(T^* < \infty) > 0$. By the strong Markov property, and the property that link activations are independent Poisson processes, this would imply that
$$\mathbb{P}\left(\text{first link activated after } T^* \text{ is } (a, v) \mid \mathcal{F}_{T^*}\right) > 0 \quad \text{on } \{T^* < \infty\}, \qquad (11)$$
which would contradict (10). Then, necessarily $T^* = \infty$, and hence $X_v = X_a$, with probability one.

On the other hand, assume that $\mathbb{P}(X_a = X_v) = 1$ for some $v \in \mathcal{V}\setminus\{a\}$. Then, with probability one, $X_a(t) = X_v(t)$ for all rational $t \geq 0$. Since, as proved above, with probability one $X_a(t)$ is not constant in $t$, both $X_a(t)$ and $X_v(t)$ should jump simultaneously. However, the probability of this occurring is zero, since link activations are independent Poisson processes. Therefore, necessarily $\mathbb{P}(X_a = X_v) < 1$.

Theorem 2 shows that, if a regular agent $a$ is influenced by stubborn agents with different beliefs, then her stationary belief $X_a$ is non-degenerate. By Corollary 1, this implies that, with probability one, her belief $X_a(t)$ keeps on fluctuating and does not stabilize on a limit. Similarly, Theorem 2 and Corollary 1 imply that, if a regular agent is influenced by stubborn agents with different beliefs, then, with probability one, her belief will not achieve a consensus asymptotically with any other agent in the society.
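Corollary 1 and Theorem 2 can be observed together in a short simulation: individual beliefs keep fluctuating, while their running averages settle. A sketch in Python follows (the Figure 4 line graph is assumed, with uniformly activated links and $\theta = 1/2$; since the holding times are i.i.d. exponentials independent of the state, time averages coincide asymptotically with averages over update steps).

```python
import random

theta = 0.5
links = [(1, 0), (1, 2), (2, 1), (2, 3)]   # (updating regular agent, met agent)
rng = random.Random(2)

x = {0: 0.0, 1: 0.7, 2: 0.2, 3: 1.0}       # arbitrary initial regular beliefs
N = 200_000
running_sum = 0.0
for _ in range(N):
    a, v = rng.choice(links)
    x[a] = (1 - theta) * x[a] + theta * x[v]   # pairwise belief update
    running_sum += x[1]

avg_1 = running_sum / N    # ergodic average of agent 1's belief
```

Here `avg_1` should approach $\mathbb{E}[X_1] = 1/3$, even though $X_1(t)$ itself keeps jumping strictly inside the interval $(0, 1)$.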
4. Expected beliefs and belief cross-products
In this section, we provide a characterization of the expected beliefs and belief cross-products of the agents. In particular, we will provide explicit characterizations of the stationary expected beliefs and belief cross-products in terms of hitting probabilities of a pair of coupled Markov chains on $\overrightarrow{\mathcal{G}} = (\mathcal{V}, \overrightarrow{\mathcal{E}})$. (Note that the set of states for such a Markov chain corresponds to the set of agents; therefore, we use the terms "state" and "agent" interchangeably in the sequel.) Specifically, we consider a coupling $(V(t), V'(t))$ of continuous-time Markov chains with state space $\mathcal{V}$, such that both $V(t)$ and $V'(t)$ have transition rate matrix $Q$, as defined in (2).

Figure 5. In (a), a network topology consisting of a line with three regular agents and two stubborn agents placed at the extremes. In (b), the directed graph representing the possible state transitions of the corresponding coupled Markov chain pair $(V(t), V'(t))$ when $\theta_e \in (0, 1)$ for all $e \in \overrightarrow{\mathcal{E}}$. Such a coupled chain pair has 25 states, four of which are absorbing. The components of $(V(t), V'(t))$ jump independently to neighbor states, unless they are either on the diagonal, or one of them is in $\mathcal{S}$: in the former case, there is some chance that the two components jump as a unique one, thus inducing a direct connection along the diagonal; in the latter case, the only component that can keep moving is the one which has not hit $\mathcal{S}$, while the one which has hit $\mathcal{S}$ is bound to remain constant from that point on. In (c), the state transition graph is reported for the Markov chain pair $(V(t), V'(t))$ in the extreme case when $\theta_e = 1$ for all $e \in \overrightarrow{\mathcal{E}}$. In this case, the coupled Markov chains are coalescing: once they meet, they stick together, moving as a single particle, and never separating from each other. This reflects the fact that there are no outgoing links from the diagonal set.

The pair $(V(t), V'(t))$ is a Markov chain on the state space $\mathcal{V}\times\mathcal{V}$ with transition rate matrix $K$ whose entries are given by
$$K_{(v,v')(w,w')} := \begin{cases} Q_{vw} & \text{if } v \neq v',\ w \neq v,\ w' = v' \\ Q_{v'w'} & \text{if } v \neq v',\ w = v,\ w' \neq v' \\ 0 & \text{if } v \neq v',\ w \neq v,\ w' \neq v' \\ Q_{vv} + Q_{v'v'} & \text{if } v \neq v',\ w = v,\ w' = v' \\ \theta_{vw}Q_{vw} & \text{if } v = v',\ w = w' \neq v \\ (1 - \theta_{vw})Q_{vw} & \text{if } v = v',\ w \neq v,\ w' = v' \\ (1 - \theta_{vw'})Q_{vw'} & \text{if } v = v',\ w = v,\ w' \neq v' \\ 0 & \text{if } v = v',\ w \neq w',\ w \neq v,\ w' \neq v' \\ 2Q_{vv} + \sum_{v'' \neq v}\theta_{vv''}Q_{vv''} & \text{if } v = v',\ w = v,\ w' = v'. \end{cases} \qquad (12)$$
The first four lines of (12) state that, conditioned on $(V(t), V'(t))$ being on a pair of non-coincident nodes $(v, v')$, each of the two components, $V(t)$ (respectively, $V'(t)$), jumps to a neighbor node $w$ with transition rate $Q_{vw}$ (respectively, to a neighbor node $w'$ with transition rate $Q_{v'w'}$), whereas the probability that both components jump at the same time is zero.
On the other hand, the last five lines of (12) state that, once the two components have met, i.e., conditioned on $V(t) = V'(t) = v$, they have some chance to stick together and jump as a single particle to a neighbor node $w$, with rate $\theta_{vw}Q_{vw}$, while each of the components $V(t)$ (respectively, $V'(t)$) still has some chance to jump alone to a neighbor node $w$ with rate $(1 - \theta_{vw})Q_{vw}$ (respectively, to $w'$ with rate $(1 - \theta_{vw'})Q_{vw'}$). In the extreme case when $\theta_{vw} = 1$ for all $(v, w) \in \overrightarrow{\mathcal{E}}$, the sixth and seventh lines of the right-hand side of (12) equal 0, and in fact one recovers the expression for the transition rates of two coalescing Markov chains: once $V(t)$ and $V'(t)$ have met, they stick together and move as a single particle, never separating from each other. See Figure 5 for a visualization of the possible state transitions of $(V(t), V'(t))$.
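The entries (12) can be assembled and sanity-checked mechanically. The sketch below (Python) builds $K$ for the 5-node line of Figure 5, with illustrative rates $r_{av} = 1$ and a common confidence parameter $\theta$, and verifies that every row sums to zero, as must hold for a transition rate matrix.

```python
from itertools import product

n = 5
S = {0, 4}          # stubborn agents at the two extremes of the line
theta = 0.5

def q(v, w):
    """Single-chain rates Q on the line (stubborn rows are identically zero)."""
    if v in S:
        return 0.0
    if abs(v - w) == 1:
        return theta                      # r_vw * theta_vw with r_vw = 1
    if v == w:
        return -sum(q(v, u) for u in range(n) if abs(v - u) == 1)
    return 0.0

def K(v, vp, w, wp):
    """Coupled-chain rates, following the case distinction in (12)."""
    if v != vp:                           # off the diagonal: independent moves
        if w == v and wp == vp:
            return q(v, v) + q(vp, vp)
        if wp == vp:
            return q(v, w)
        if w == v:
            return q(vp, wp)
        return 0.0                        # simultaneous jumps have rate zero
    if w == wp and w != v:
        return theta * q(v, w)            # stick together and jump as one
    if wp == v and w != v:
        return (1 - theta) * q(v, w)      # first component jumps alone
    if w == v and wp != v:
        return (1 - theta) * q(v, wp)     # second component jumps alone
    if w == v and wp == v:
        return 2 * q(v, v) + sum(theta * q(v, u) for u in range(n) if u != v)
    return 0.0

row_sums = [sum(K(v, vp, w, wp) for w, wp in product(range(n), repeat=2))
            for v, vp in product(range(n), repeat=2)]
```

All 25 row sums vanish; setting `theta = 1` makes the jump-alone rates disappear, recovering the coalescing chains of Figure 5(c).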
For $v, w, v', w' \in \mathcal{V}$, and $t \geq 0$, we will denote by
$$\gamma_{vw}(t) := \mathbb{P}_v(V(t) = w) = \mathbb{P}_v(V'(t) = w), \qquad \eta_{vv'ww'}(t) := \mathbb{P}_{vv'}(V(t) = w,\ V'(t) = w'), \qquad (13)$$
the marginal and joint transition probabilities of the two Markov chains at time $t$. It is a standard fact (see, e.g., [41, Theorem 2.8.3]) that such transition probabilities satisfy the Kolmogorov backward equations
$$\frac{\mathrm{d}}{\mathrm{d}t}\gamma_{vw}(t) = \sum_{\tilde{v}} Q_{v\tilde{v}}\,\gamma_{\tilde{v}w}(t), \qquad \frac{\mathrm{d}}{\mathrm{d}t}\eta_{vv'ww'}(t) = \sum_{\tilde{v},\tilde{v}'} K_{(v,v')(\tilde{v},\tilde{v}')}\,\eta_{\tilde{v}\tilde{v}'ww'}(t), \qquad v, v', w, w' \in \mathcal{V}, \qquad (14)$$
with initial conditions
$$\gamma_{vw}(0) = \begin{cases} 1 & \text{if } v = w \\ 0 & \text{if } v \neq w, \end{cases} \qquad \eta_{vv'ww'}(0) = \begin{cases} 1 & \text{if } (v, v') = (w, w') \\ 0 & \text{if } (v, v') \neq (w, w'). \end{cases} \qquad (15)$$
The next simple result provides a fundamental link between the belief evolution process introduced in Section 2 and the coupled Markov chains, by showing that the expected values and expected cross-products of the agents' beliefs satisfy the same linear system (14) of ordinary differential equations as the transition probabilities of $(V(t), V'(t))$. Lemma 3.
For all $v, v' \in \mathcal{V}$, and $t \geq 0$, it holds that
$$\frac{\mathrm{d}}{\mathrm{d}t}\mathbb{E}[X_v(t)] = \sum_w Q_{vw}\mathbb{E}[X_w(t)], \qquad \frac{\mathrm{d}}{\mathrm{d}t}\mathbb{E}[X_v(t)X_{v'}(t)] = \sum_{w,w'} K_{(v,v')(w,w')}\mathbb{E}[X_w(t)X_{w'}(t)], \qquad (16)$$
so that
$$\mathbb{E}[X_v(t)] = \sum_w \gamma_{vw}(t)\mathbb{E}[X_w(0)], \qquad \mathbb{E}[X_v(t)X_{v'}(t)] = \sum_{w,w'} \eta_{vv'ww'}(t)\mathbb{E}[X_w(0)X_{w'}(0)]. \qquad (17)$$
Proof.
Recall that, for the belief update model introduced in Section 2, agents' interactions occur at the ticking instants $T_{(k)}$ of a Poisson clock of rate $r$. Moreover, with conditional probability $r_{vw}/r$, any such interaction involves agent $v$ updating her opinion to a convex combination of her current belief and the one of agent $w$, with weight $\theta_{vw}$ on the latter. It follows that, for all $k \geq 0$, and $v \in \mathcal{V}$,
$$\mathbb{E}[X_v(T_{(k+1)}) \mid \mathcal{F}_{T_{(k)}}] - X_v(T_{(k)}) = \frac{1}{r}\sum_w r_{vw}\theta_{vw}\left(X_w(T_{(k)}) - X_v(T_{(k)})\right) = \frac{1}{r}\sum_w Q_{vw}X_w(T_{(k)}).$$
Then, the above and the fact that the Poisson clock has rate $r$ imply the left-most equation in (16).

Similarly, for all $v \neq v' \in \mathcal{V}$, one gets that
$$\begin{aligned} \mathbb{E}[X_v(T_{(k+1)})X_{v'}(T_{(k+1)}) \mid \mathcal{F}_{T_{(k)}}] - X_v(T_{(k)})X_{v'}(T_{(k)}) &= \frac{1}{r}\sum_w r_{vw}\theta_{vw}\left(X_w(T_{(k)})X_{v'}(T_{(k)}) - X_v(T_{(k)})X_{v'}(T_{(k)})\right) \\ &\quad + \frac{1}{r}\sum_{w'} r_{v'w'}\theta_{v'w'}\left(X_v(T_{(k)})X_{w'}(T_{(k)}) - X_v(T_{(k)})X_{v'}(T_{(k)})\right) \\ &= \frac{1}{r}\sum_{w,w'} K_{(v,v')(w,w')}X_w(T_{(k)})X_{w'}(T_{(k)}), \end{aligned}$$
as well as, writing $X_v$ for $X_v(T_{(k)})$,
$$\begin{aligned} \mathbb{E}[X_v^2(T_{(k+1)}) \mid \mathcal{F}_{T_{(k)}}] - X_v^2(T_{(k)}) &= \frac{1}{r}\sum_w r_{vw}\,\mathbb{E}\left[\left((1-\theta_{vw})X_v + \theta_{vw}X_w\right)^2 - X_v^2\right] \\ &= \frac{1}{r}\sum_w r_{vw}\theta_{vw}\left(\theta_{vw}X_w^2 + 2(1-\theta_{vw})X_vX_w - (2-\theta_{vw})X_v^2\right) \\ &= \frac{1}{r}\sum_w Q_{vw}(1-\theta_{vw})\left(X_vX_w - X_v^2\right) + \frac{1}{r}\sum_{w'} Q_{vw'}(1-\theta_{vw'})\left(X_vX_{w'} - X_v^2\right) + \frac{1}{r}\sum_w \theta_{vw}Q_{vw}\left(X_w^2 - X_v^2\right) \\ &= \frac{1}{r}\sum_{w,w'} K_{(v,v)(w,w')}X_wX_{w'}. \end{aligned}$$
Then, the two identities above and the fact that the Poisson clock has rate $r$ imply the right-most equation in (16). It follows from (14), (15), and (16) that $\sum_w \gamma_{vw}(t)\mathbb{E}[X_w(0)]$ and $\mathbb{E}[X_v(t)]$ satisfy the same linear system of differential equations, and the same holds true for $\mathbb{E}[X_v(t)X_{v'}(t)]$ and $\sum_{w,w'}\eta_{vv'ww'}(t)\mathbb{E}[X_w(0)X_{w'}(0)]$. This readily implies (17).

We are now in a position to prove the main result of this section, characterizing the expected values and expected cross-products of the agents' stationary beliefs in terms of the hitting probabilities of the coupled Markov chains. Let us denote by $T_{\mathcal{S}}$ and $T'_{\mathcal{S}}$ the hitting times of the Markov chains $V(t)$ and, respectively, $V'(t)$ on the set of stubborn agents $\mathcal{S}$, i.e.,
$$T_{\mathcal{S}} := \inf\{t \geq 0 : V(t) \in \mathcal{S}\}, \qquad T'_{\mathcal{S}} := \inf\{t \geq 0 : V'(t) \in \mathcal{S}\}.$$
Observe that Assumption 1 implies that both $T_{\mathcal{S}}$ and $T'_{\mathcal{S}}$ are finite with probability one, for every initial distribution of the pair $(V(0), V'(0))$. Hence, for all $v, v' \in \mathcal{V}$, we can define the hitting probability distributions $\gamma_v$ over $\mathcal{S}$ and $\eta_{vv'}$ over $\mathcal{S}\times\mathcal{S}$, whose entries are respectively given by
$$\gamma_{vs} := \mathbb{P}_v(V(T_{\mathcal{S}}) = s), \quad s \in \mathcal{S}, \qquad \eta_{vv'ss'} := \mathbb{P}_{vv'}(V(T_{\mathcal{S}}) = s,\ V'(T'_{\mathcal{S}}) = s'), \quad s, s' \in \mathcal{S}. \qquad (18)$$
Then, we have the following: Theorem 3.
Let Assumption 1 hold. Then, for every value of the stubborn agents' beliefs $\{x_s\} \in \mathbb{R}^{\mathcal{S}}$,
$$\mathbb{E}[X_v] = \sum_s \gamma_{vs}x_s, \qquad \mathbb{E}[X_vX_{v'}] = \sum_{s,s'} \eta_{vv'ss'}x_sx_{s'}, \qquad v, v' \in \mathcal{V}. \qquad (19)$$
Moreover, $\{\mathbb{E}[X_v] : v \in \mathcal{V}\}$ and $\{\mathbb{E}[X_vX_{v'}] : v, v' \in \mathcal{V}\}$ are the unique vectors in $\mathbb{R}^{\mathcal{V}}$ and $\mathbb{R}^{\mathcal{V}\times\mathcal{V}}$, respectively, satisfying
$$\sum_v Q_{av}\mathbb{E}[X_v] = 0, \qquad \mathbb{E}[X_s] = x_s, \qquad \forall a \in \mathcal{A},\ \forall s \in \mathcal{S}, \qquad (20)$$
$$\sum_{w,w'} K_{(a,a')(w,w')}\mathbb{E}[X_wX_{w'}] = 0, \qquad \mathbb{E}[X_vX_{v'}] = \mathbb{E}[X_v]\mathbb{E}[X_{v'}], \qquad \forall a, a' \in \mathcal{A},\ \forall (v, v') \in (\mathcal{V}\times\mathcal{V})\setminus(\mathcal{A}\times\mathcal{A}). \qquad (21)$$
Proof.
Assumption 1 implies that $\lim_{t\to\infty}\gamma_{vs}(t) = \gamma_{vs}$ for every $s \in \mathcal{S}$, and $\lim_{t\to\infty}\gamma_{va}(t) = 0$ for every $a \in \mathcal{A}$. Therefore, (17) implies that
$$\lim_{t\to\infty}\mathbb{E}[X_v(t)] = \sum_s \gamma_{vs}x_s, \qquad \forall v \in \mathcal{V}. \qquad (22)$$
Now, if the initial belief distribution $\mathcal{L}(X(0))$ coincides with the stationary one $\mathcal{L}(X)$, one has that $\mathcal{L}(X(t)) = \mathcal{L}(X)$ for all $t \geq 0$, so that in particular $\mathbb{E}[X_v(t)] = \mathbb{E}[X_v]$, and hence $\lim_{t\to\infty}\mathbb{E}[X_v(t)] = \mathbb{E}[X_v]$ for all $v$. Substituting in the right-hand side of (22), this proves the left-most identity in (19). The right-most identity in (19) follows from an analogous argument.

In order to prove the second part of the claim, observe that the expected stationary beliefs and belief cross-products necessarily satisfy (20) and (21), since, by Lemma 3, they evolve according to the autonomous differential equations (16), and are convergent by the arguments above. On the other hand, uniqueness of the solutions of (20) and (21) follows from [4, Ch. 2, Lemma 27]. Remark 1.
Since the stationary beliefs $X_v$ take values in the interval $[\min_s x_s, \max_s x_s]$, both $\mathbb{E}[X_v]$ and $\mathbb{E}[X_vX_{v'}]$ exist and are finite. Hence, Corollary 1 implies that the asymptotic empirical averages of the agents' beliefs and of their cross-products, i.e., the almost surely constant limits
$$\lim_{t\to\infty}\frac{1}{t}\int_0^t X_v(u)\,\mathrm{d}u, \qquad \lim_{t\to\infty}\frac{1}{t}\int_0^t X_v(u)X_{v'}(u)\,\mathrm{d}u, \qquad v, v' \in \mathcal{V},$$
coincide with the expected stationary beliefs and belief cross-products, i.e., with $\mathbb{E}[X_v]$ and $\mathbb{E}[X_vX_{v'}]$, respectively, independently of the distribution of the initial regular agents' beliefs. Remark 2.
As a consequence of Theorem 3, one gets that, if $\mathcal{X}_a = \{x^*\}$, then $X_a = x^*$, and, by Corollary 1, $X_a(t)$ converges to $x^*$ with probability one. Hence, in particular, $\mathbb{E}[X_a] = x^*$ and $\mathrm{Var}[X_a] = 0$. This can be thought of as a sort of complement to Theorem 2.
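The first identity in (19) can be checked by brute force: $\gamma_{vs}$ is simply the probability that a random walk started at $v$ is absorbed at $s$. A Monte Carlo sketch (Python; the 5-node line with stubborn set $\mathcal{S} = \{0, 4\}$, $x_0 = 0$, $x_4 = 1$ is an assumed illustrative instance):

```python
import random

rng = random.Random(3)

def absorbed_at(v):
    """Run a simple random walk on the line 0-1-2-3-4 until it hits {0, 4}."""
    while v not in (0, 4):
        v += rng.choice((-1, 1))
    return v

samples = 20_000
gamma_14 = sum(absorbed_at(1) == 4 for _ in range(samples)) / samples
est_EX1 = gamma_14 * 1.0   # E[X_1] = gamma_10 * x_0 + gamma_14 * x_4, with x_0 = 0
```

The gambler's-ruin value is $\gamma_{14} = 1/4$, so `est_EX1` is close to $0.25$, in agreement with the harmonic characterization (20).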
5. Explicit computations of stationary expected beliefs and variances
We now present a few examples of explicit computations of the stationary expected beliefs and variances for social networks obtained using the construction in Example 1, starting from a simple undirected graph $G = (\mathcal{V}, \mathcal{E})$. Recall that, in this case, $Q_{av} = \theta/d_a$ for all $a \in \mathcal{A}$ and $v \in \mathcal{V}$ such that $\{a, v\} \in \mathcal{E}$. It then follows from Theorem 3 that the expected stationary beliefs can be characterized as the unique vector in $\mathbb{R}^{\mathcal{V}}$ satisfying
$$\mathbb{E}[X_a] = \frac{1}{d_a}\sum_{v : \{v,a\}\in\mathcal{E}} \mathbb{E}[X_v], \qquad \mathbb{E}[X_s] = x_s, \qquad \forall a \in \mathcal{A},\ \forall s \in \mathcal{S}. \qquad (23)$$
Moreover, in the special case when $\theta = 1$, the second moments of the stationary beliefs are the unique solutions of
$$\mathbb{E}[X_a^2] = \frac{1}{d_a}\sum_{v : \{v,a\}\in\mathcal{E}} \mathbb{E}[X_v^2], \qquad \mathbb{E}[X_s^2] = x_s^2, \qquad \forall a \in \mathcal{A},\ \forall s \in \mathcal{S}. \qquad (24)$$

Example 2. (Tree) Let us consider the case when $G = (\mathcal{V}, \mathcal{E})$ is a tree and the stubborn agent set $\mathcal{S}$ consists of only two elements, $s_0$ and $s_1$, with beliefs $x_0$ and $x_1$, respectively. For $v, w \in \mathcal{V}$, let $d(v, w)$ denote their distance, i.e., the length of the shortest path connecting them in $G$. Let $m := d(s_0, s_1)$, and let $W = \{s_0 = w_0, w_1, \ldots, w_{m-1}, w_m = s_1\}$, where $\{w_{i-1}, w_i\} \in \mathcal{E}$ for all $1 \leq i \leq m$, be the unique path connecting $s_0$ to $s_1$ in $G$. Then, we can partition the rest of the node set as $\mathcal{V}\setminus W = \bigcup_{0\leq i\leq m}\mathcal{V}_i$, where $\mathcal{V}_i$ is the set of nodes $v \in \mathcal{V}\setminus W$ such that the unique paths from $v$ to $s_0$ and $s_1$ both pass through $w_i$. Since the set of neighbors of every $v \in \mathcal{V}_i$ is contained in $\mathcal{V}_i \cup \{w_i\}$, (23) implies that
$$\mathbb{E}[X_v] = \mathbb{E}[X_{w_i}], \qquad \forall v \in \mathcal{V}_i,\ 0 \leq i \leq m. \qquad (25)$$
Hence, one is left with determining the values of $\mathbb{E}[X_{w_i}]$, for $0 \leq i \leq m$. Observe that clearly
$$\mathbb{E}[X_{w_0}] = x_0, \qquad \mathbb{E}[X_{w_m}] = x_1. \qquad (26)$$
Figure 6. In the left-most figure, expected stationary beliefs and variances (the latter valid for the special case when $\theta = 1$) in a social network with a line graph topology with $n = 5$, and stubborn agents positioned at the two extremities. The expected stationary beliefs are linear interpolations of the two stubborn agents' beliefs, while their variances follow a parabolic profile with maximum at the central agent, and zero variance for the two stubborn agents $s_0$ and $s_1$. In the right-most figure, expected stationary beliefs in a social network with a tree-like topology, represented by different levels of gray. The solution is obtained by linearly interpolating between the two stubborn agents' beliefs, $x_0$ (white) and $x_1$ (black), on the vertices lying on the path between $s_0$ and $s_1$, and then extended by putting it constant on each of the connected components of the subgraph obtained by removing the links of such path.

On the other hand, for all $0 < i < m$, the neighborhood of $w_i$ consists of $w_{i-1}$, $w_{i+1}$, and possibly some elements of $\mathcal{V}_i$. Then, (23) and (25) imply that
$$\mathbb{E}[X_{w_i}] = \frac{1}{2}\mathbb{E}[X_{w_{i-1}}] + \frac{1}{2}\mathbb{E}[X_{w_{i+1}}], \qquad 0 < i < m. \qquad (27)$$
Now, observe that, since
$$\frac{i}{m}x_1 + \frac{m-i}{m}x_0 = \frac{1}{2}\left(\frac{i-1}{m}x_1 + \frac{m-i+1}{m}x_0\right) + \frac{1}{2}\left(\frac{i+1}{m}x_1 + \frac{m-i-1}{m}x_0\right),$$
the unique solution of (26) and (27) is given by
$$\mathbb{E}[X_{w_i}] = \frac{i}{m}x_1 + \frac{m-i}{m}x_0, \qquad 0 \leq i \leq m. \qquad (28)$$
Upon observing that $\mathbb{E}[X_v] = \mathbb{E}[X_{w_i}]$ for every $v \in \mathcal{V}_i$, and identifying, with a slight abuse of notation, $d(v, s_j)$ with $d(w_i, s_j)$ for all $v \in \mathcal{V}_i$, $0 \leq i \leq m$, and $j = 0, 1$, we may rewrite (25) and (28) as
$$\mathbb{E}[X_v] = \begin{cases} x_i & \text{if } \mathcal{X}_v = \{x_i\},\ i = 0, 1, \\ \dfrac{d(v, s_1)x_0 + d(v, s_0)x_1}{d(v, s_0) + d(v, s_1)} & \text{if } |\mathcal{X}_v| = 2. \end{cases} \qquad (29)$$
In other words, the stationary expected beliefs are linear interpolations of the beliefs of the stubborn agents. A totally analogous argument shows that, if the confidence parameter satisfies $\theta = 1$, then (24) is satisfied by
$$\mathbb{E}[X_a^2] = \frac{d(a, s_1)x_0^2 + d(a, s_0)x_1^2}{d(a, s_0) + d(a, s_1)},$$
so that the stationary variance of agent $a$'s belief is given by
$$\mathrm{Var}[X_a] = \mathbb{E}[X_a^2] - \mathbb{E}[X_a]^2 = \frac{d(a, s_0)\,d(a, s_1)}{\left(d(a, s_0) + d(a, s_1)\right)^2}\,(x_1 - x_0)^2.$$
The two equations above show that the belief of each regular agent keeps on fluctuating ergodically around a value which depends on the relative distance of the agent from the two stubborn agents.
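For $\theta = 1$ both moments are harmonic off the stubborn set, so the variance formula reduces to two linear solves. A sketch (Python; the 5-node line with stubborn beliefs $x_0 = -1$ and $x_1 = +1$ at the two extremes is an assumed example) verifies it by fixed-point iteration on (23) and (24):

```python
# Harmonic solves for first and second moments on the line 0-1-2-3-4,
# stubborn beliefs x_0 = -1 (node 0) and x_1 = +1 (node 4), theta = 1.
neighbors = {1: [0, 2], 2: [1, 3], 3: [2, 4]}
m1 = {0: -1.0, 1: 0.0, 2: 0.0, 3: 0.0, 4: 1.0}   # E[X_v]
m2 = {0: 1.0, 1: 0.0, 2: 0.0, 3: 0.0, 4: 1.0}    # E[X_v^2], boundary x_s^2
for _ in range(2000):                             # Jacobi-style iteration
    m1.update({a: sum(m1[v] for v in nb) / len(nb) for a, nb in neighbors.items()})
    m2.update({a: sum(m2[v] for v in nb) / len(nb) for a, nb in neighbors.items()})

variances = {a: m2[a] - m1[a] ** 2 for a in (1, 2, 3)}
# d(a,s_0) d(a,s_1) / (d(a,s_0)+d(a,s_1))^2 * (x_1 - x_0)^2, with distances a, 4-a:
predicted = {a: a * (4 - a) / 16 * 4 for a in (1, 2, 3)}
```

Both dictionaries give $0.75$, $1.0$, $0.75$: the parabolic variance profile of Figure 6, maximal at the central agent.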
Figure 7.
Two social network with a special case of tree-like topology, known as star graph, and two stubbornagents. In social network depicted in left-most figure one of the stubborn agents, s , occupies the center, while theother one, s , occupies one of the leaves. There, all regular agents’ stationary beliefs coincide with the belief x of s ,represented in white. In social network depicted in right-most figure, none of the stubborn agents, occupy the center.There, all regular agents’ stationary beliefs coincide with the arithmetic average (represented in gray) of x (white),and x (black). The amplitude of such fluctuations is maximal for central nodes, i.e., those which are homoge-neously distant from both stubborn agents. This can be given the intuitive explanation that, thecloser a regular agent is to a stubborn agent s with respect to the other stubborn agent s ′ , the morefrequent her, possibly indirect, interactions are with agent s and the less frequent her interactionsare with s ′ , and hence the stronger the influence is from s rather than from s ′ . Moreover, the moreequidistant a regular agent a is from s and s , the higher the uncertainty is on whether, in therecent past, agent a has been influenced by either s , or s .On its left-hand side, Figure 6 reports the expected stationary beliefs and their variances fora social network with population size n = 5, line (a special case of tree-like) topology: the twostubborn agents are positioned in the extremities, and plotted in white, and black, respectively,while regular agents are plotted in different shades of gray corresponding to their relative distancefrom the extremities, and hence to their expected stationary belief. In the right-hand side of Figure6, a more complex tree-like topology is reported, again with two stubborn agents colored in white,and black respectively, and with regular agents colored by different shades of gray correspondingto their relative vicinity to the two stubborn agents. 
Figure 7 reports two social networks with star topology (another special case of a tree). In both cases there are two stubborn agents, colored in white and black, respectively. In the left-most picture, the white stubborn agent occupies the center, so that all the rest of the population will eventually adopt his belief, and is therefore colored in white. In the right-most picture, neither of the stubborn agents occupies the center, and hence all the regular agents, accordingly colored in gray, are equally influenced by the two stubborn agents.

Example 3. (Barbell) For even n ≥
6, consider a barbell-like topology consisting of two complete graphs with vertex sets V_0 and V_1, both of size n/2, and an extra link {a_0, a_1} with a_0 ∈ A_0 and a_1 ∈ A_1 (see Figure 8). Let S = {s_0, s_1} with s_0 ∈ V_0 \ {a_0} and s_1 ∈ V_1 \ {a_1}. Then, (23) is satisfied by

E[X_a] =
  ((n+4)/(n+8)) x_{s_0} + (4/(n+8)) x_{s_1}      if a = a_0,
  (4/(n+8)) x_{s_0} + ((n+4)/(n+8)) x_{s_1}      if a = a_1,
  ((n+6)/(n+8)) x_{s_0} + (2/(n+8)) x_{s_1}      if a ∈ A_0 \ {a_0},
  (2/(n+8)) x_{s_0} + ((n+6)/(n+8)) x_{s_1}      if a ∈ A_1 \ {a_1}.
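As a sanity check, the piecewise expression above can be verified numerically by solving the harmonic equations satisfied by the expected stationary beliefs. The sketch below is illustrative and not from the paper: it assumes the simple-random-walk weights of the canonical construction and the stubborn beliefs x_{s_0} = 1, x_{s_1} = 0, with n = 12 as in Figure 8.

```python
import numpy as np

# Barbell of Example 3: two cliques V0, V1 of size n/2 joined by one link
# {a0, a1}; s0 in V0 and s1 in V1 are stubborn. (Illustrative sketch.)
n = 12
half = n // 2
A = np.zeros((n, n))
for block in (range(half), range(half, n)):      # two complete graphs
    for i in block:
        for j in block:
            if i != j:
                A[i, j] = 1
a0, a1, s0, s1 = 0, half, 1, half + 1            # {a0, a1} is the extra link
A[a0, a1] = A[a1, a0] = 1
x = {s0: 1.0, s1: 0.0}                           # stubborn beliefs (assumed)

# Harmonicity (23): E[X_a] equals the average of the neighbors' values.
deg = A.sum(axis=1)
regular = [v for v in range(n) if v not in x]
idx = {v: i for i, v in enumerate(regular)}
L = np.eye(len(regular))
b = np.zeros(len(regular))
for v in regular:
    for w in range(n):
        if A[v, w]:
            if w in x:
                b[idx[v]] += x[w] / deg[v]
            else:
                L[idx[v], idx[w]] -= 1 / deg[v]
EX = np.linalg.solve(L, b)

# Compare with the closed form (coefficient of x_{s0}, since x_{s1} = 0).
for v in regular:
    if v == a0:
        c = (n + 4) / (n + 8)
    elif v == a1:
        c = 4 / (n + 8)
    else:
        c = (n + 6) / (n + 8) if v < half else 2 / (n + 8)
    assert abs(EX[idx[v]] - c) < 1e-10
```

For n = 12 this gives E[X_{a_0}] = 16/20 = 0.8 and E[X_a] = 0.9 for the remaining regular agents of A_0, already exhibiting the polarization of the two halves.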
Figure 8.
A social network with population size n = 12, a barbell-like topology, and two stubborn agents. In each of the two halves of the graph, the expected stationary beliefs concentrate around the belief of the stubborn agent in the respective half.

In particular, observe that, as n grows large, E[X_a] converges to x_{s_0} for all a ∈ A_0, and E[X_a] converges to x_{s_1} for all a ∈ A_1. Hence, the network polarizes around the opinions of the two stubborn agents.

Example 4. (Abelian Cayley graph) Let us denote by Z_m the integers modulo m. Put V = Z_m^d, and let Θ ⊆ V \ {0} be a subset generating V and such that if x ∈ Θ, then also −x ∈ Θ. The Abelian Cayley graph associated with Θ is the graph G = (V, E) where {v, w} ∈ E iff v − w ∈ Θ. Notice that Abelian Cayley graphs are always undirected and regular, with d_v = |Θ| for every v ∈ V. Denote by e_i ∈ V the vector whose components are all 0 except for the i-th one, which equals 1. If Θ = {±e_1, ..., ±e_d}, the corresponding G is the classical d-dimensional torus of size n = m^d. In particular, for d = 1, this is a cycle, while, for d = 2, this is the torus (see Figure 9).

Let the stubborn agent set consist of only two elements: S := {s_0, s_1}. Then the following formula holds (see [4, Ch. 2, Corollary 10]):

γ_{v s_0} = P_v(T_{s_0} < T_{s_1}) = (E_{v s_1} − E_{v s_0} + E_{s_0 s_1}) / (E_{s_0 s_1} + E_{s_1 s_0}),   (30)

where E_{vw} := E_v[T_{w}] denotes the expected time it takes a Markov chain started at v to hit node w for the first time. On the other hand, the average hitting times E_{vw} can be expressed in terms of the Green function of the graph, defined as the unique matrix Z ∈ R^{V×V} such that

1′Z = 0,   (I − P)Z = I − n^{−1} 1 1′,

where 1 stands for the all-1 vector. The relation with the hitting times is given by

E_{vw} = n (Z_{ww} − Z_{vw}).   (31)

(See, e.g., [4, Ch. 2, Lemma 12].) Let P be the stochastic matrix corresponding to the simple random walk on G. It is a standard fact that P is irreducible and that its unique invariant probability measure is the uniform one.
(See, e.g., [9, Chapter 15].) Moreover, there is an orthonormal basis of eigenvectors for P which is the same for every Θ: if l = (l_1, ..., l_d) ∈ V, define Υ_l ∈ C^V by

Υ_l(k) = m^{−d/2} exp((2πi/m) l · k),   k = (k_1, ..., k_d) ∈ V,

Figure 9.
Two social networks with cycle and 2-dimensional toroidal topology, respectively.

(where l · k := Σ_i l_i k_i). The corresponding eigenvalues can be expressed as follows:

λ_l = (1/|Θ_+|) Σ_{k ∈ Θ_+} cos((2π/m) l · k),

where Θ_+ is any subset of Θ such that |{x, −x} ∩ Θ_+| = 1 for all x ∈ Θ. Hence,

Z_{vw} = m^{−d} Σ_{l ≠ 0} exp((2πi/m) l · (v − w)) / (1 − (1/|Θ_+|) Σ_{k ∈ Θ_+} cos((2π/m) l · k)),   v, w ∈ V.   (32)

From (30), (31), and the fact that E_{s_0 s_1} = E_{s_1 s_0} by symmetry, one obtains

γ_{a s_0} = 1/2 + ( Σ_{l ≠ 0} [exp((2πi/m) l · (a − s_0)) − exp((2πi/m) l · (a − s_1))] / (1 − λ_l) ) / ( 2 Σ_{l ≠ 0} [1 − cos((2π/m) l · (s_0 − s_1))] / (1 − λ_l) ),   a ∈ A.   (33)

The expected stationary beliefs can then be computed using Theorem 3 and (33).
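Formula (33) can be cross-checked numerically against a direct computation of the hitting probabilities. The sketch below (an illustration, not the authors' code) does this for a cycle (d = 1, Θ = {+1, −1}), where γ_{v s_0} can also be obtained by solving the harmonic equations of the simple random walk; the size m = 12 and the stubborn positions are illustrative choices.

```python
import numpy as np

m = 12
s0, s1 = 0, 5
lam = np.array([np.cos(2 * np.pi * l / m) for l in range(m)])  # eigenvalues

def gamma_fourier(a):
    """Hitting probability gamma_{a,s0} via the spectral formula (33)."""
    ls = np.arange(1, m)  # l != 0
    num = np.sum((np.exp(2j * np.pi * ls * (a - s0) / m)
                  - np.exp(2j * np.pi * ls * (a - s1) / m)) / (1 - lam[ls]))
    den = 2 * np.sum((1 - np.cos(2 * np.pi * ls * (s0 - s1) / m)) / (1 - lam[ls]))
    return 0.5 + num.real / den

# Direct computation: gamma_{v,s0} is harmonic on regular nodes, with
# boundary values 1 at s0 and 0 at s1.
regular = [v for v in range(m) if v not in (s0, s1)]
idx = {v: i for i, v in enumerate(regular)}
L = np.eye(len(regular))
b = np.zeros(len(regular))
for v in regular:
    for w in ((v - 1) % m, (v + 1) % m):
        if w == s0:
            b[idx[v]] += 0.5
        elif w != s1:
            L[idx[v], idx[w]] -= 0.5
g = np.linalg.solve(L, b)

for v in regular:
    assert abs(gamma_fourier(v) - g[idx[v]]) < 1e-8
```

On the cycle the direct solution is the classical gambler's-ruin probability, linear along each of the two arcs joining s_0 and s_1; the assertion confirms that the Fourier sum reproduces it.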
6. Homogeneous influence in highly fluid social networks
In this section, we present estimates for the expected stationary beliefs and belief variances as a function of the underlying social network. First, we will introduce the notion of fluidity of a social network, a quantity which depends only on the geometry of the network and on the size of the stubborn agent set. Then, we will prove that, when the social network is highly fluid, the influence of the stubborn agents on the rest of the society is homogeneous, meaning that most of the regular agents have approximately the same stationary expected belief and (in the special case when θ_e = 1 for all e ∈ E→) belief variance.

Recall that Theorem 3 allows one to express the first moment of the stationary beliefs in terms of the hitting probability distributions γ_v on the stubborn agent set S of the continuous-time Markov chain V(t) with state space V and transition rate matrix Q. It is a simple but key observation that such hitting probabilities only depend on the restriction of Q to A × V (the other rows affecting only the behavior of V(t) after hitting S), and that they do not change if any row of Q is multiplied by some positive scalar (this multiplication having the only effect of speeding up or slowing down V(t), without changing the jump probabilities). Formally, we have the following
Lemma 4.
Let P ∈ R^{V×V} be stochastic and such that

P_{av} = α_a Q_{av},   ∀ a ∈ A, v ∈ V, a ≠ v,   (34)

for some α_a > 0. Let W(k), for k = 0, 1, ..., be a discrete-time Markov chain with transition probability matrix P. Then, the hitting probability distributions of the continuous-time Markov chain V(t) with transition rate matrix Q coincide with those of W(k), i.e.,

γ_{vs} = P_v(W(U_S) = s),   ∀ v ∈ V, s ∈ S,

where U_S := min{k ≥ 0 : W(k) ∈ S}.

Lemma 4 allows one to consider the discrete-time chain W(k) with any transition probability matrix P satisfying (34) in order to compute the hitting probability distributions γ_v. In fact, it is convenient to consider stochastic matrices P ∈ R^{V×V} that, in addition to satisfying (34), are irreducible and aperiodic. Let us denote the set of all such matrices by P. Observe that, provided that Assumption 1 holds, the set P is non-empty, since it can easily be checked to contain, e.g., the matrix P̄ ∈ R^{V×V} with entries P̄_{sv} = 1/n, P̄_{aa} = 0, and P̄_{aw} = −Q_{aw}/Q_{aa}, for all s ∈ S, v ∈ V, a ∈ A, and w ∈ V \ {a}. Observe that, for every P ∈ P, irreducibility implies the existence of a unique invariant probability measure, and, together with aperiodicity, convergence to it of the time-k distributions

p_v(k) = {p_{vw}(k) : w ∈ V},   p_{vw}(k) := P_v(W(k) = w),   (35)

irrespective of the initial state v ∈ V. We introduce the following notation.

Definition 1.
Given a social network satisfying Assumption 1 and P ∈ P, let π = P′π be the unique invariant probability measure of P. Let

π(S) := Σ_{s ∈ S} π_s,   π_∗ := min_{v ∈ V} π_v,

be the size of the stubborn agent set and, respectively, the minimum weight of an agent, as measured by π. Moreover, let

τ := inf{ k ≥ 0 : max_{v ∈ V} ||p_v(k) − π||_TV ≤ 1/(2e) },   (36)

where the p_v(k) are as in (35), denote the (variational distance) mixing time of the discrete-time Markov chain W(k) with state space V and transition probability matrix P.

For the canonical construction of a social network introduced in Example 1, the quantities above have a more explicit characterization which allows for a geometric interpretation.

Example 5.
Let us consider the canonical construction of a social network from a given connected multigraph G = (V, E), outlined in Example 1. Define P̃ ∈ R^{V×V} by putting P̃_{vw} = κ_{v,w}/d_v, where κ_{v,w} is the multiplicity of the link {v, w} in E, and d_v = Σ_w κ_{v,w} is the degree of node v in G. Then, put P = (I + P̃)/
2, where I stands for the identity matrix on V. In fact, P defined as above is known in the literature as the transition matrix of the simple lazy random walk on G [31, page 9]. Observe that (34) is satisfied, connectedness of G implies irreducibility of P̃, and hence of P, while aperiodicity of P (not necessarily of P̃) is immediate. Hence, P ∈ P. Moreover, the invariant measure π (of both P and P̃) is given by

π_v = d_v/(n d̄),   ∀ v ∈ V,

where d̄ := n^{−1} Σ_v d_v is the average degree of G. Observe that, in this construction,

π(S) = (Σ_{v ∈ V} d_v)^{−1} Σ_{s ∈ S} d_s   (37)

is the ratio between the total degree of the stubborn agent set S and the total degree of the whole agent set V in G. Moreover, the mixing time can be bounded in terms of the conductance of G, defined as

Φ := min{ φ(W) : W ⊆ V, 0 < Σ_{w ∈ W} d_w ≤ n d̄/2 },   (38)

where

φ(W) := ( Σ_{w ∈ W} Σ_{v ∈ V\W} κ_{v,w} ) / ( Σ_{w ∈ W} d_w )

is the bottleneck ratio of the set W ⊆ V, i.e., the ratio between the number of links connecting W to the rest of the node set and its total degree. In particular, standard results imply that

1/(4Φ) ≤ τ ≤ (2/Φ²) log(2e/π_∗).   (39)

(See, e.g., [31, Theorem 7.3] for the lower bound, and combine [31, Theorem 12.3] and [21, Theorem 6.2.1] in order to get the upper bound.)

We conclude this example by observing that different choices of the multigraph G may result in the same social network, hence in particular in the same directed graph G→. For example, G could have links between pairs of nodes both belonging to S. While such links are irrelevant for the belief dynamics (they have no correspondence in the directed graph G→), they do affect the stochastic matrix P, hence the quantities π(S) and τ.
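The quantities of Definition 1 can be computed explicitly for a small instance of this canonical construction. The following sketch (an illustration with a toy graph and stubborn set of my choosing; the threshold 1/(2e) in (36) is an assumption about the mixing-time definition) builds the lazy random walk and finds π, π(S), π_∗, and τ by direct iteration.

```python
import numpy as np

# A toy connected graph given by adjacency lists, with stubborn set S.
G = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4], 4: [3]}
n = len(G)
S = [0, 4]

# P~ is the simple random walk; P = (I + P~)/2 is its lazy version.
Pt = np.zeros((n, n))
for v, nbrs in G.items():
    for w in nbrs:
        Pt[v, w] = 1.0 / len(G[v])
P = 0.5 * (np.eye(n) + Pt)

# Invariant measure pi_v = d_v / (n * dbar), i.e. proportional to degree.
deg = np.array([len(G[v]) for v in range(n)])
pi = deg / deg.sum()
assert np.allclose(pi @ P, pi)
pi_S = pi[S].sum()          # pi(S), cf. (37)
pi_star = pi.min()

def tv(mu, nu):
    """Total variation distance between two probability vectors."""
    return 0.5 * np.abs(mu - nu).sum()

# Mixing time: smallest k with max_v ||p_v(k) - pi||_TV <= 1/(2e), cf. (36).
Pk, tau = np.eye(n), 0
while max(tv(Pk[v], pi) for v in range(n)) > 1 / (2 * np.e):
    Pk, tau = Pk @ P, tau + 1
print(pi_S, pi_star, tau)
```

Here the total degree is 10, so π(S) = (2 + 1)/10 = 0.3 and π_∗ = 0.1; τ is small, as expected for a well-connected 5-node graph.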
In particular, while removing such links from G clearly has the effect of decreasing π(S), it also has the potential of increasing the mixing time τ (since less connected graphs tend to have larger mixing times). Hence, while the stationary belief distribution does not depend on the presence of links connecting pairs of stubborn agents in G, the estimates of the stationary beliefs' moments derived in this section could potentially benefit from considering these links.

We are now ready to introduce the notion of fluidity of a social network.

Definition 2.
Let the social network satisfy Assumption 1. For every P ∈ P, let

ψ(P) := n π_∗ / ( τ π(S) log(e/(τ π(S))) ).   (40)

The fluidity of the social network is

Ψ := sup{ ψ(P) : P ∈ P }.   (41)

A sequence of social networks (or, more briefly, a social network) of increasing population size n is highly fluid if Ψ diverges as n grows large.

Our estimates will show that, for large-scale highly fluid social networks, the first two moments of the stationary beliefs of most of the regular agents in the population can be approximated by those of a weighted-mean belief Z, supported on the finite set X := {x_s : s ∈ S} and given by

P(Z = z) = Σ_{s ∈ S} γ_s 1_{{z}}(x_s),   z ∈ X,   γ_s := Σ_{v ∈ V} π_v γ_{vs},   s ∈ S.   (42)

We refer to the probability distribution {γ_s : s ∈ S} as the stationary stubborn agent distribution. Observe that γ_s = P_π(W(U_S) = s) coincides with the probability that the Markov chain W(k), started from the stationary distribution π, hits the stubborn agent s before any other stubborn agent s′ ∈ S. In fact, as we will clarify below, one may interpret γ_s as a relative measure of the influence of the stubborn agent s on the society, compared to the rest of the stubborn agents s′ ∈ S. More precisely, let us denote the expected value and variance of the weighted-mean belief Z by

E[Z] := Σ_{s ∈ S} γ_s x_s,   σ²_Z := Σ_{s ∈ S} γ_s (x_s − E[Z])².   (43)
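For a concrete instance, the following sketch computes the stationary stubborn agent distribution of (42) and the moments (43) of the weighted-mean belief Z for a line topology with n = 5 and stubborn agents at the two extremities, as around Figure 6. The stubborn beliefs 0 and 1 are illustrative assumptions; on a line, γ_{vs} is the classical gambler's-ruin probability (linear in v), and the lazy-walk invariant measure is proportional to the degree.

```python
import numpy as np

n = 5
s0, s1 = 0, n - 1
x = {s0: 0.0, s1: 1.0}  # stubborn beliefs (illustrative)

# Hitting probabilities on a line: gambler's ruin, linear in the position.
gamma_s1 = np.array([v / (n - 1) for v in range(n)])
gamma_s0 = 1 - gamma_s1

# Invariant measure of the (lazy) simple random walk: proportional to degree.
deg = np.array([1] + [2] * (n - 2) + [1])
pi = deg / deg.sum()

g0 = pi @ gamma_s0                 # gamma_{s0} in (42)
g1 = pi @ gamma_s1                 # gamma_{s1} in (42)
EZ = g0 * x[s0] + g1 * x[s1]       # E[Z] in (43)
varZ = g0 * (x[s0] - EZ) ** 2 + g1 * (x[s1] - EZ) ** 2
print(g0, g1, EZ, varZ)            # by symmetry: 0.5, 0.5, 0.5, 0.25
```

The symmetric placement of the two stubborn agents makes their influences equal (γ_{s_0} = γ_{s_1} = 1/2), so Z is a fair coin flip between the two stubborn beliefs.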
Let σ²_v denote the variance of the stationary belief of agent v,

σ²_v := E[X²_v] − E[X_v]².

We also use the notation Δ_∗ for the maximum difference between stubborn agents' beliefs, i.e.,

Δ_∗ := max{ x_s − x_{s′} : s, s′ ∈ S }.   (44)

The next theorem presents the main result of this section.

Theorem 4.
Let Assumption 1 hold. Then, for all ε > 0,

(1/n) |{ v ∈ V : |E[X_v] − E[Z]| > Δ_∗ ε }| ≤ 3/(ε Ψ).   (45)

Furthermore, if the trust parameters satisfy θ_{av} = 1 for all (a, v) ∈ E→, then

(1/n) |{ v ∈ V : |σ²_v − σ²_Z| > 2Δ²_∗ ε }| ≤ 3/(ε Ψ).   (46)

This theorem implies that in large-scale highly fluid social networks, as the population size n grows large, the stationary expected beliefs and variances of the regular agents concentrate around fixed values corresponding to the expected weighted-mean belief E[Z] and, respectively, its variance σ²_Z (see Figures 10 and 11). We refer to this phenomenon as homogeneous influence of the stubborn agents on the rest of the society, meaning that their influence on most of the agents in the society is approximately the same. Indeed, it amounts to homogeneity of the first and second moments of the agents' stationary beliefs. This shows that in highly fluid social networks, most of the regular agents are affected by the stubborn agents in approximately the same way.

Observe that, provided that nπ_∗ remains bounded from below by a positive constant, as we will prove to be the case in all the considered examples, a social network is highly fluid when the stationary measure π(S) of the stubborn agent set vanishes fast enough to compensate for the possible growth of the mixing time τ, as the network size n grows large. Hence, intuitively, Theorem 4 states that, if the set S and the mixing time τ are both small enough, then the influence of the stubborn agents will be felt by most of the regular agents much later than the time it takes them to influence each other, so that their beliefs' empirical averages and variances will converge to values very close to each other. Theorem 4 is proved in Section 6.3.
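The concentration asserted by Theorem 4 can be observed numerically. The sketch below (an illustration, not the authors' experiment) solves for the expected stationary beliefs on a 3-dimensional torus, a highly fluid family by Example 7 since |S| = 2 is o(n^{1−2/3}), and measures how many agents deviate substantially from E[Z]; the torus size and the stubborn positions are illustrative choices.

```python
import numpy as np

m, d = 4, 3
n = m ** d

def node(c):                     # coordinates (mod m) -> index
    return sum((c[i] % m) * m ** i for i in range(d))

def coords(v):
    return [(v // m ** i) % m for i in range(d)]

def nbrs(v):                     # the 2d torus neighbors of v
    out = []
    for i in range(d):
        for delta in (1, -1):
            c = coords(v)
            c[i] += delta
            out.append(node(c))
    return out

s0, s1 = node((0, 0, 0)), node((2, 2, 2))
x = {s0: 0.0, s1: 1.0}           # stubborn beliefs (illustrative)

# Expected stationary beliefs: harmonic on regular nodes (Theorem 3).
regular = [v for v in range(n) if v not in x]
idx = {v: i for i, v in enumerate(regular)}
L = np.eye(len(regular))
b = np.zeros(len(regular))
for v in regular:
    for w in nbrs(v):
        if w in x:
            b[idx[v]] += x[w] / (2 * d)
        else:
            L[idx[v], idx[w]] -= 1 / (2 * d)
EX = np.linalg.solve(L, b)

# The torus is regular, so pi is uniform and E[Z] is the average expected
# belief; by the symmetry v -> (2,2,2) - v it equals 1/2 exactly.
EZ = (EX.sum() + x[s0] + x[s1]) / n
frac = np.mean(np.abs(EX - EZ) > 0.25)  # fraction of strongly deviating agents
print(EZ, frac)
```

Only the agents very close to one of the two stubborn nodes deviate substantially from E[Z]; as m grows, this fraction shrinks, which is the homogeneous-influence phenomenon of Theorem 4.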
Its proof relies on the characterization of the expected stationary beliefs and variances in terms of the hitting probabilities γ_{vs}. The definition of a highly fluid network implies that the (expected) time it takes a Markov chain to hit the set S, when started from most of the nodes, is much larger than the mixing time τ. Hence, before hitting S, the chain loses memory of where it started from, and approaches S almost as if it were started from the stationary distribution π.

It is worth stressing how the condition of homogeneous influence may significantly differ from an approximate consensus. In fact, the former only involves the (first and second moments of the) marginal distributions of the agents' stationary beliefs, and does not have any implication for their joint probability law. A distribution in which the agents' stationary beliefs are all mutually independent would be compatible with the condition of homogeneous influence, as would an approximate consensus condition, which would require the stationary beliefs of most of the agents to be close to each other with high probability. We will study this topic in ongoing work.

Before proving Theorem 4 in Section 6.3, we present some examples of highly fluid social networks in Section 6.2.
Figure 10.
Homogeneous influence in Erdős-Rényi graphs of increasing sizes. The population sizes are n = 100 in (a), n = 500 in (b), and n = 2000 in (c), while p = 2n^{−1} log n. In each case, there are two stubborn agents, selected uniformly at random from the node set V, holding beliefs equal to 0 and 1, respectively. The figures show the empirical density of the expected stationary beliefs, for typical realizations of such graphs. As predicted by Theorem 4, this empirical density tends to concentrate around a single value as the population size grows large.

We now present some examples of families of social networks that are highly fluid in the limit of large population size n. All the examples will follow the canonical social network construction of Examples 1 and 5, starting from an undirected graph G. We start with an example of a social network which is not highly fluid.

Example 6. (Barbell) For even n ≥
6, consider the barbell-like topology introduced in Example 3. It is not hard to see that the minimum on the right-hand side of (38) is achieved by W = V_0, so that the conductance satisfies

Φ = ( (n/2)((n/2) − 1) + 1 )^{−1} ≤ 8/n².

It then follows from (39) that τ ≥ (4Φ)^{−1} ≥ n²/32. Since d_v ≥ n/2 − 1 for every v ∈ V, it follows that the barbell-like network is never highly fluid provided that |S| ≥
1. In fact, we have already seen in Example 3 that the expected stationary beliefs polarize in this case, so that the influence of the stubborn agents on the rest of the society is not homogeneous.

Let us now consider a standard deterministic family of symmetric graphs.
Example 7. (d-dimensional tori) Let us consider the case of a d-dimensional torus of size n = m^d, introduced in Example 4. Since this is a regular graph, one has π_∗ n = 1 and π(S) = |S|/n. Moreover, it is well known that (see, e.g., [31, Theorem 5.5])

τ ≤ C_d n^{2/d},

for some constant C_d depending on the dimension d only. Then, τ π(S) ≤ C_d |S| n^{2/d − 1}. For d ≥ 3, this implies that the social network with toroidal topology is highly fluid, and hence homogeneous influence holds, provided that |S| = o(n^{1 − 2/d}).

In contrast, for d ≤ 2, our arguments do not allow one to prove high fluidity of the social network. In fact, using the explicit calculations of Example 2, one can see that the stubborn agents' influence is not homogeneous in the case d = 1, since the expected stationary beliefs do not concentrate. On the other hand, in the case d = 2, we conjecture that, using the explicit expression (33) and Fourier analysis, one should be able to show that the condition |S| = o(n^{1/2}) is sufficient for homogeneous influence. In fact, a more general conjecture is that |S| = o(n^{1 − 1/d}) should suffice for homogeneous influence, when d ≥ 2. Proving this conjecture would require an analysis finer than the one developed in this section, possibly based on discrete Fourier transform techniques. The motivation behind our conjecture comes from thinking of a limit continuous model, which can be informally summarized as follows. First, recall that the expected stationary beliefs vector solves the Laplace equation on G with boundary conditions assigned on the stubborn agent set S. Now,
consider the Laplace equation on a d-dimensional manifold with boundary conditions on a certain subset. Then, in order for the problem to be well-posed, such a subset should have dimension d − 1. This suggests that, in the discrete setting, one should take |S| = Θ(n^{1 − 1/d}) = Θ(m^{d − 1}) in order to guarantee that the expected stationary beliefs vector is not almost constant in the limit of large n.

We now present four examples of random graph sequences which have been the object of extensive research. Following a common terminology, we say that some property of such graphs holds with high probability if the probability that it holds approaches one in the limit of large population size n.

Example 8. (Connected Erdős-Rényi) Consider the Erdős-Rényi random graph G = ER(n, p), i.e., the random undirected graph with n vertices, in which each pair of distinct vertices is linked with probability p, independently of the others. We focus on the regime p = c n^{−1} log n, with c >
1, where the Erdős-Rényi graph is known to be connected with high probability [21, Thm. 2.8.2]. In this regime, results by Cooper and Frieze [16] ensure that, with high probability, τ = O(log n), and there exists a positive constant δ such that δ c log n ≤ d_v ≤ 2c log n for each node v [21, Lemma 6.5.2]. In particular, it follows that, with high probability, (π_∗ n)^{−1} ≤ 2/δ. Hence, using (37), one finds that the resulting social network is highly fluid, provided that |S| = o(n/log n), as n grows large. Figure 10 shows the empirical density of the expected stationary beliefs for typical realizations of Erdős-Rényi graphs of increasing size n = 100, 500, 2000, with |S| = 2.

Example 9. (Fixed degree distribution) Consider a random graph G = FD(n, λ), generated as follows. Fix V with |V| ≥
2, and let {d_v : v ∈ V} be a family of independent and identically distributed random variables with P(d_v = k) = λ_k, for k ∈ N. Assume that λ_1 = λ_2 = 0, that λ_k > 0 for some k ≥ 3, and that the first two moments Σ_k λ_k k and Σ_k λ_k k² are finite. Then, let G = FD(n, λ) be the multigraph with vertex set V generated by conditioning on the event E_n := {Σ_v d_v is even} (whose probability converges either to 1/2 or to 1 as n grows large) and matching the vertices uniformly at random given their degrees. (See [21, Ch. 3] for details on this construction.) Then, results in [21, Ch. 6.3] show that the mixing time of the lazy random walk on G satisfies τ = O(log n) with high probability. Therefore, using (37), one finds that the resulting social network is highly fluid with high probability provided that Σ_{s ∈ S} d_s = o(n/log n).

Example 10. (Preferential attachment) The preferential attachment model was introduced by Barabasi and Albert [8] to model real-world networks, which typically exhibit a power-law degree distribution. We follow [21, Ch. 4] and consider the random multigraph G = PA(n, m) with n vertices, generated by starting with two vertices connected by m parallel links, and then subsequently adding a new vertex and connecting it to m of the existing nodes with probability proportional to their current degree. As shown in [21, Th. 4.1.4], the degree distribution converges in probability to the power law λ_k = 2m(m + 1)/(k(k + 1)(k + 2)), and the graph is connected with high probability [21, Th. 4.6.1]. In particular, it follows that, with high probability, the average degree d̄ remains bounded, while the second moment of the degree distribution diverges as n grows large. On the other hand, results by Mihail et al. [35] (see also [21, Th. 6.4.2]) imply that the mixing time of the lazy random walk satisfies τ = O(log n), with high probability. Therefore, thanks to (37), the resulting social network is highly fluid with high probability if Σ_{s ∈ S} d_s = o(n/log n).

Example 11.
(Watts & Strogatz's small world) Watts and Strogatz [50], and then Newman and Watts [40], proposed simple models of random graphs to explain the empirical evidence that most social networks contain a large number of triangles and have a small diameter (the latter property has become known as the small-world phenomenon). We consider Newman and Watts' model, which is a random graph G = NW(n, k, p) with n vertices, obtained by starting from a Cayley graph on the ring Z_n with generator {−k, −k + 1, ..., −1, 1, ..., k − 1, k}, and adding to it a Poisson number of shortcuts with mean pkn, attached to randomly chosen vertices. In this
Figure 11.
Homogeneous influence in preferential attachment networks of increasing sizes. The figures show the empirical density of the expected stationary beliefs, for typical realizations of such graphs. The population sizes are n = 100 in (a) and (d), n = 500 in (b) and (e), and n = 2000 in (c) and (f), while m = 4. In each case, there are two stubborn agents holding beliefs equal to 0 and 1, respectively. In (a), (b), and (c), the stubborn agents are chosen to coincide with the two latest attached nodes, and therefore tend to have the lowest degrees. In contrast, in (d), (e), and (f), the stubborn agents are chosen to be the two initial nodes, and therefore tend to have the highest degrees. As predicted by Theorem 4, the empirical density of the expected stationary beliefs tends to concentrate around a single value as the population size grows large. The rate at which this concentration occurs is faster in the top three figures, where Σ_{s ∈ S} d_s is smaller, and slower in the bottom three figures, where Σ_{s ∈ S} d_s is larger.

case, the average degree remains bounded with high probability as n grows large, while results by Durrett [21, Th. 6.6.1] show that the mixing time satisfies τ = O(log n). This, and (37), imply that the network is highly fluid with high probability provided that Σ_{s ∈ S} d_s = o(n/log n).

In order to prove Theorem 4, we will obtain estimates on the hitting probability distributions γ_v. The following result provides a useful estimate on the total variation distance between the hitting probability distribution γ_v over S and the stationary stubborn agent distribution γ.

Lemma 5.
Let the social network satisfy Assumption 1. Then, for all P ∈ P and k ≥ 0,

||γ_v − γ||_TV ≤ P_v(U_S < k) + exp(−⌊k/τ⌋),   v ∈ V,   (47)

where τ is the mixing time of the discrete-time Markov chain W(k) with transition probability matrix P (cf. (36)), and U_S := min{k ≥ 0 : W(k) ∈ S} is the hitting time of such a chain on S.

Proof. Notice that (47) is trivial when k = 0 or v ∈ S. For k ≥ 1 and a ∈ A, one can reason as follows. Let Ũ_S := inf{k′ ≥ k : W(k′) ∈ S}. Thanks to Lemma 4, the distributions of W(U_S) and W(Ũ_S), conditioned on W(0) = a, are given by γ_a and Σ_v p_{av}(k) γ_v, respectively. Using the identity

||µ − ν||_TV = (1/2) sup_{f ∈ [−1,1]^V} { Σ_v (µ_v − ν_v) f_v }
(see, e.g., [31, Prop. 4.5]), and observing that the event {U_S ≥ k} implies {W(U_S) = W(Ũ_S)}, one gets that

||γ_a − Σ_v p_{av}(k) γ_v||_TV = (1/2) sup_f { E_a[f(W(U_S)) − f(W(Ũ_S))] } = (1/2) sup_f { E_a[ 1_{{U_S < k}} ( f(W(U_S)) − f(W(Ũ_S)) ) ] } ≤ P_a(U_S < k).

On the other hand, the standard submultiplicativity property of the total variation distance from the invariant measure implies that ||p_a(k) − π||_TV ≤ exp(−⌊k/τ⌋), so that

||Σ_v p_{av}(k) γ_v − γ||_TV = ||Σ_v (p_{av}(k) − π_v) γ_v||_TV ≤ ||p_a(k) − π||_TV ≤ exp(−⌊k/τ⌋).

Then, (47) follows from the triangle inequality. □

Lemma 6. Consider a social network satisfying Assumption 1. Then, for every ε > 0,

(1/n) |{ v ∈ V : ||γ_v − γ||_TV ≥ ε }| ≤ 3/(ε Ψ).

Proof. Fix an arbitrary P ∈ P, and let π = P′π be its invariant measure. Let W(k) be a discrete-time Markov chain with transition probability matrix P. For every nonnegative integer k, stationarity of π and the union bound yield

P_π(U_S < k) = P_π( ∪_{0 ≤ j < k} {W(j) ∈ S} ) ≤ k π(S).   (48)

Choosing k := ⌈τ log(e/(τπ(S)))⌉ (one may assume τπ(S) ≤ 1, the claim being trivial otherwise), averaging (47) against π, and using (48) give

Σ_v π_v ||γ_v − γ||_TV ≤ P_π(U_S < k) + exp(−⌊k/τ⌋) ≤ 3 τ π(S) log(e/(τπ(S))).

Hence, by Markov's inequality,

(1/n) |{ v : ||γ_v − γ||_TV ≥ ε }| ≤ (n π_∗)^{−1} Σ_v π_v 1_{{||γ_v − γ||_TV ≥ ε}} ≤ 3 τ π(S) log(e/(τπ(S))) / (n π_∗ ε) = 3/(ε ψ(P)).

Optimizing over P ∈ P completes the proof. □

Proof of Theorem 4. Let y_s := x_s + Δ_∗/2 − max{x_{s′} : s′ ∈ S} for all s ∈ S. Clearly |y_s| ≤ Δ_∗/2, and

|E[X_v] − E[Z]| = |Σ_s γ_{vs} x_s − Σ_s γ_s x_s| = |Σ_s γ_{vs} y_s − Σ_s γ_s y_s| ≤ Δ_∗ ||γ_v − γ||_TV,

so that (45) immediately follows from Lemma 6. On the other hand, in order to show (46), first recall that, if θ_e = 1 for all e ∈ E→, then Eq. (12) provides the transition rates of coalescing Markov chains. In particular, if V(0) = V′(0), then V(T_S) = V′(T′_S), so that η^{vv}_{ss′} = γ_{vs} if s = s′, and η^{vv}_{ss′} = 0 otherwise. Then, it follows from Theorem 3 that

σ²_v = E[X²_v] − E[X_v]² = Σ_{s,s′} η^{vv}_{ss′} x_s x_{s′} − (Σ_s γ_{vs} x_s)² = Σ_s γ_{vs} x²_s − (Σ_s γ_{vs} x_s)² = (1/2) Σ_s Σ_{s′} γ_{vs} γ_{vs′} (x_s − x_{s′})².

Similarly, σ²_Z = (1/2) Σ_{s,s′} γ_s γ_{s′} (x_s − x_{s′})², so that

|σ²_v − σ²_Z| ≤ (1/2) Σ_{s,s′} |γ_{vs} γ_{vs′} − γ_s γ_{s′}| (x_s − x_{s′})² ≤ (Δ²_∗/2) Σ_{s,s′} ( γ_{vs} |γ_{vs′} − γ_{s′}| + γ_{s′} |γ_{vs} − γ_s| ) = 2 Δ²_∗ ||γ_v − γ||_TV.
Now, (46) follows again from a direct application of Lemma 6. □

Remark 3. For a stochastic matrix P ∈ P which is reversible, i.e., such that π_v P_{vw} = π_w P_{wv} for all v, w ∈ V (observe that this additional property is enjoyed by the matrix P considered in Example 5 for the canonical construction of a social network from an undirected graph G), one can potentially obtain tighter estimates on the homogeneity of the agents' influence. In fact, one could use the results on the approximate exponentiality of hitting times (i.e., the property that the distribution of T_S/E_π[T_S] is close to a rate-1 exponential distribution; see, e.g., [4, Ch. 3.5]) in order to show that, for a continuous-time Markov chain with transition rate matrix P − I, one has P_π(T_S ≤ t) ≤ (t + τ)/E_π[T_S] for all t ≥ 0. Using this bound in place of (48), arguments analogous to those developed in this section imply that τ/E_π[T_S] = o(1) is a sufficient condition for homogeneous influence. Observe that using Markov's inequality and (48) with k = ⌊1/(2π(S))⌋ gives

E_π[T_S] = E_π[U_S] ≥ ⌊1/(2π(S))⌋ P_π( U_S ≥ ⌊1/(2π(S))⌋ ) ≥ ⌊1/(2π(S))⌋ ( 1 − π(S) ⌊1/(2π(S))⌋ ) ≥ (1 − 2π(S))/(4π(S)).

Hence, this argument would potentially provide a weaker sufficient condition for homogeneous influence in situations where E_π[T_S] is much larger than 1/π(S).

7. Conclusion

In this paper, we have studied a possible mechanism explaining persistent disagreement and opinion fluctuations in social networks. We have considered an inhomogeneous stochastic gossip model of continuous opinion dynamics, whereby some stubborn agents in the network never change their opinions.
We have shown that the presence of these stubborn agents leads to persistent fluctuations and disagreement in the rest of the society: the beliefs of regular agents do not converge almost surely, and keep on fluctuating in an ergodic fashion. A duality argument allows for characterizing the expected stationary beliefs in terms of the hitting probabilities of a Markov chain on the graph describing the social network, while the correlation between the stationary beliefs of any pair of regular agents can be characterized in terms of the hitting probabilities of a pair of coupled Markov chains. We have shown that in highly fluid social networks, whose associated Markov chains have mixing times which are sufficiently smaller than the inverse of the stubborn agents' set size, the vectors of the stationary expected beliefs and variances are almost constant, so that the stubborn agents have homogeneous influence on the rest of the society. We wish to emphasize that homogeneous influence in highly fluid societies need not imply approximate consensus among the agents, whose beliefs may well fluctuate in an almost uncorrelated way. A deeper understanding of this topic is ongoing work.

Acknowledgments. The authors would like to thank an anonymous Referee for many detailed comments which significantly helped in improving the presentation. This research was partially supported by the NSF grant SES-0729361, the AFOSR grant FA9550-09-1-0420, the ARO grant 911NF-09-1-0556, the Draper UR&D program, and the AFOSR MURI R6756-G2. The work of the second author was partially supported by the Swedish Research Council through the LCCC Linnaeus Center and the junior research grant 'Information dynamics over large-scale networks'.

References

[1] D. Acemoglu, K. Bimpikis, and A. Ozdaglar, Dynamics of information exchange in endogenous social networks.
[2] D. Acemoglu, M.A. Dahleh, I. Lobel, and A. Ozdaglar, Bayesian learning in social networks, Review of Economic Studies (2011), no. 4, 1201–1236.
[3] D. Acemoglu, A. Ozdaglar, and A.
ParandehGheibi, Spread of (mis)information in social networks, Games and Economic Behavior (2010), no. 2, 194–227.
[4] D. Aldous and J. Fill, Reversible Markov chains and random walks on graphs, monograph in preparation, 2002.
[5] R. Axelrod, The dissemination of culture, Journal of Conflict Resolution (1997), no. 2, 203–226.
[6] V. Bala and S. Goyal, Learning from neighbours, Review of Economic Studies (1998), no. 3, 595–621.
[7] A. Banerjee and D. Fudenberg, Word-of-mouth learning, Games and Economic Behavior (2004), 1–22.
[8] A.-L. Barabási and R. Albert, Emergence of scaling in random networks, Science (1999), 509–512.
[9] E. Behrends, Introduction to Markov chains with special emphasis on rapid mixing, Vieweg Verlag, 2000.
[10] V.D. Blondel, J.M. Hendrickx, and J.N. Tsitsiklis, On Krause's consensus formation model with state-dependent connectivity, IEEE Transactions on Automatic Control (2009), no. 11, 2586–2597.
[11] C. Castellano, S. Fortunato, and V. Loreto, Statistical physics of social dynamics, Reviews of Modern Physics (2009), 591–644.
[12] P. Clifford and A. Sudbury, A model for spatial conflict, Biometrika (1973), 581–588.
[13] G. Cohen, Party over policy: The dominating impact of group influence on political beliefs, Journal of Personality and Social Psychology (2003), no. 5, 808–822.
[14] G. Como and F. Fagnani, Scaling limits for continuous opinion dynamics systems, The Annals of Applied Probability (2011), no. 4, 1537–1567.
[15] P. Donnelly and D. Welsh, Finite particle systems and infection models, Mathematical Proceedings of the Cambridge Philosophical Society (1982), 167–182.
[16] C. Cooper and A. Frieze, The cover time of sparse random graphs, Random Structures and Algorithms (2006), no. 1-2, 1–16.
[17] J.T.
Cox, Coalescing random walks and voter model consensus times on the torus in $\mathbb{Z}^d$, The Annals of Probability (1989), no. 4, 1333–1366.
[18] G. Deffuant, D. Neau, F. Amblard, and G. Weisbuch, Mixing beliefs among interacting agents, Advances in Complex Systems (2000), 87–98.
[19] P.M. DeMarzo, D. Vayanos, and J. Zwiebel, Persuasion bias, social influence, and unidimensional opinions, The Quarterly Journal of Economics (2003), no. 3, 909–968.
[20] P. Diaconis and D. Freedman, Iterated random functions, SIAM Review (1999), no. 1, 45–76.
[21] R. Durrett, Random graph dynamics, Cambridge University Press, 2006.
[22] P. Erdős, On a family of symmetric Bernoulli convolutions, American Journal of Mathematics (1939), 974–975.
[23] P. Erdős, On the smoothness properties of Bernoulli convolutions, American Journal of Mathematics (1940), 180–186.
[24] F. Fagnani and S. Zampieri, Randomized consensus algorithms over large scale networks, IEEE Journal on Selected Areas in Communications (2008), no. 4, 634–649.
[25] D. Gale and S. Kariv, Bayesian learning in social networks, Games and Economic Behavior (2003), no. 2, 329–346.
[26] B. Golub and M.O. Jackson, Naive learning in social networks and the wisdom of crowds, American Economic Journal: Microeconomics (2010), no. 1, 112–149.
[27] R.A. Holley and T.M. Liggett, Ergodic theorems for weakly interacting infinite systems and the voter model, The Annals of Probability (1975), no. 4, 643–663.
[28] A. Jadbabaie, J. Lin, and A.S. Morse, Coordination of groups of mobile autonomous agents using nearest neighbor rules, IEEE Transactions on Automatic Control (2003), no. 6, 988–1001.
[29] G.H. Kramer, Short-term fluctuations in U.S. voting behavior, 1896–1964, American Political Science Review (1971), no. 1, 131–143.
[30] U. Krause, A discrete non-linear and non-autonomous model of consensus formation, pp. 227–236, Gordon and Breach, Amsterdam, 2000.
[31] D.A. Levin, Y. Peres, and E.L.
Wilmer, Markov chains and mixing times, American Mathematical Society, 2010.
[32] T.M. Liggett, Interacting particle systems, Springer-Verlag, 1985.
[33] T.M. Liggett, Stochastic interacting systems: Contact, voter, and exclusion processes, Springer, Berlin, 1999.
[34] J. Lorenz, A stabilization theorem for continuous opinion dynamics, Physica A (2005), no. 1, 217–223.
[35] M. Mihail, C. Papadimitriou, and A. Saberi, On certain connectivity properties of the internet topology, Proceedings of the 44th Annual IEEE Symposium on Foundations of Computer Science, 2003.
[36] M. Mobilia, Does a single zealot affect an infinite group of voters?, Physical Review Letters (2003), no. 2, 028701.
[37] M. Mobilia and I.T. Georgiev, Voting and catalytic processes with inhomogeneities, Physical Review E (2005), 046102.
[38] M. Mobilia, A. Petersen, and S. Redner, On the role of zealotry in the voter model, Journal of Statistical Mechanics: Theory and Experiment (2007), 447–483.
[39] A. Nedić and A. Ozdaglar, Distributed subgradient methods for multi-agent optimization, IEEE Transactions on Automatic Control (2009), no. 1, 48–61.
[40] M.E.J. Newman and D.J. Watts, Renormalization group analysis of the small-world network model, Physics Letters A (1999), 341–346.
[41] J.R. Norris, Markov chains, Cambridge University Press, 1997.
[42] R. Olfati-Saber and R.M. Murray, Consensus problems in networks of agents with switching topology and time-delays, IEEE Transactions on Automatic Control (2004), no. 9, 1520–1533.
[43] A. Olshevsky and J.N. Tsitsiklis, Convergence speed in distributed consensus and averaging, SIAM Journal on Control and Optimization (2009), no. 1, 33–55.
[44] Y. Peres and B. Solomyak, Absolute continuity of Bernoulli convolutions, a simple proof, Mathematics Research Letters (1996), 231–239.
[45] M.M. Rao and R.J.
Swift, Probability theory with applications, Springer, 2006.
[46] L. Smith and P. Sørensen, Pathological outcomes of observational learning, Econometrica (2000), no. 2, 371–398.
[47] D.W. Stroock, Probability theory: An analytic view, 2nd ed., Cambridge University Press, 2011.
[48] J.N. Tsitsiklis, Problems in decentralized decision making and computation, Ph.D. thesis, Dept. of Electrical Engineering and Computer Science, Massachusetts Institute of Technology, 1984.
[49] J.N. Tsitsiklis, D.P. Bertsekas, and M. Athans, Distributed asynchronous deterministic and stochastic gradient optimization algorithms, IEEE Transactions on Automatic Control (1986), no. 9, 803–812.
[50] D.J. Watts and S.H. Strogatz, Collective dynamics of 'small-world' networks, Nature 393 (1998), 440–442.