Opinion dynamics model based on cognitive biases
Pawel Sobkowicz ∗
KEN 94/140, 02-777 Warsaw, Poland
∗ [email protected]
We present an introduction to a novel model of individual and group opinion dynamics, taking into account the different ways in which different sources of information are filtered due to cognitive biases. The agent based model, using Bayesian updating of the individual belief distribution, is based on the recent psychology work by Dan Kahan. The open nature of the model allows one to study the effects of both static and time-dependent biases and information processing filters. In particular, the paper compares the effects of two important psychological mechanisms: the confirmation bias and politically motivated reasoning. Depending on the effectiveness of the information filtering (agent bias), the agents confronted with an objective information source may either reach a consensus based on the truth, or remain divided despite the evidence. In general, the model might provide an understanding of the increasingly polarized modern societies, especially as it allows mixing of different types of filters: psychological, social, and algorithmic.
Keywords: Opinion change, motivated reasoning, confirmation bias, Bayesian updating, agent based model
I. INTRODUCTION
The actual processes through which individual people and groups of people evaluate information and form or change their opinions are very complex. Psychology offers many descriptions of these processes, often including multiple pre-conditions and influencing factors. The assumption that opinions form through truth-seeking, rational reasoning is, unfortunately, not true in most cases. The list of recognized cognitive biases that influence our mental processes (rational and emotional) is very long, covering over 175 named entries (Benson [8]). The situation becomes even more complex when we try to describe how individual opinion changes combine to form dynamical social systems. In addition to the problems alluded to above, one has to consider the multiple forms of social interactions: personal (face to face and, especially in recent years, mediated by electronic media) and public (news, comments, rumours and other modes of information reaching an individual). These interactions vary with respect to their informative and emotional content, trust in the source of the information, its pervasiveness and strength, and more. Taking these difficulties into account, the task of an accurate description of individual and group opinion change dynamics appears insurmountable. Yet the need to understand how and why our societies (especially democratic ones) arrive at certain decisions, how and why people change their beliefs (or why they remain unconvinced in the light of 'overwhelming evidence'), what the mechanisms driving the increasing polarization of our societies are, and how to make people talk to and understand each other, is so great that, despite the challenges, there is intense research on the topic.

For several years, group opinion change has been a fertile ground for sociophysics and Agent Based Modelling. The initial works used many of the tools and ideas developed to describe magnetic phenomena, exploiting the analogies between atomic spin states and opinions, and between the magnetic field and external influences, to derive statistical descriptions of global opinion changes. There are many approaches, for example the voter model [7, 20, 25, 35], the Sznajd model [10, 72–74, 82–84, 92], the bounded confidence model (Deffuant et al. [27, 28], Weisbuch et al. [101], Weisbuch [102]), the Hegselmann-Krause model [42], the social impact model of Nowak-Latané [61, 62] and its further modifications including the role of leaders [43, 48, 49, 76], and many others. Historically, the initial focus was on the formation of consensus, treated as a form of a phase transition, but later works focused on the role of minorities, with special attention given to the effects of the presence of inflexible, extremist individuals.

The literature on numerical models of opinion dynamics has grown enormously in the past decade. For relatively recent reviews we point out Castellano [19], Castellano et al. [21], Galam [34]. While most of the early works were limited to studies of the models themselves (rather than specific social contexts), showing very interesting sociophysical results but only a weak, qualitative correspondence to any actual societies (Sobkowicz [75]), recent years have changed this situation. The availability of large scale datasets documenting opinions and interactions between people (derived mainly from the Internet and social media) has allowed, in principle, attempts at quantitative descriptions of specific opinion evolution processes.
The number of sociophysical and ABM based works aimed at a quantitative description of real societies remains limited. For example, in the case of political elections, only a few papers attempt such a description (Caruso and Castorina [18], Fonseca and Louca [32], Fortunato and Castellano [33], Galam [36], Palombi and Toti [66], Sobkowicz [81]).

Despite the undoubted advances, the sociophysical models of individual behaviour are still rather crude. Most of the sociophysical agents and descriptions of their individual behaviour are too simplistic, too much 'spin-like', and thus unable to capture the intricacies of our behaviours. This observation also applies to the descriptions of the interactions between the agents or, more generally, to the way that new information is treated in the process of adjusting currently held opinions. Most Agent Based Models assume relatively simple forms of such interactions, for example rules which state that if an agent is surrounded by other agents holding an opinion different than its own, it would change its opinion to conform to the majority. As experience with real life situations shows, such 'forced' conversion is rather unlikely among people (in contrast with atomic spins...). The differences between the model behaviour of spin-persons (spinsons, Nyczka and Sznajd-Weron [63]) and our understanding of real people have forced the introduction of special classes of agents, behaving in a way that is different from the rest: conformists, anticonformists, contrarians, inflexibles, fanatics... Using appropriate mixtures of 'normal' and special agents it has been possible to make the models reproduce more complex types of social behaviour.

In the author's opinion, such an artificial division of the agents into separate classes with different, fixed internal dynamics, while improving the models' range of results, is psychologically incorrect. In a specific situation any person may behave inflexibly or show contrarian behaviour. For this reason, the author has proposed a model in which opinion change results from a combination of the agent's information and emotional state, coupled with the informative and emotional content of the message processed by the agent (which may originate from an interaction with another agent or from the media). The model, introduced in Sobkowicz [77, 78], has allowed a quantitative description of an Internet discussion forum [79] and even a prediction of the results of recent elections in Poland [81]. The model applies, however, only to situations in which the emotional component is very strong, determining the individual behaviour.

One of the most active discussions in the psychology of belief dynamics is centred around apparently irrational processing of information: the operation of biases, heuristic shortcuts and other effects that stand in contrast with the classical tenets of rational choice theory. Such ostensibly irrational behaviours have not received much attention within the ABM community so far, despite their presence in many social situations. Important examples are provided by the strong opposition to well documented arguments in the cases of climate change, vaccination, energy policies etc. There are well known differences in risk perception and reactions, leading to polarization so strong it almost exceeds the capacity to communicate (Kahneman [52], Opaluch and Segerson [65], Sunstein [88, 89], Sunstein et al. [90], Sunstein [91], Tversky and Kahneman [96, 97, 98], Tversky et al. [100]).
Our current work has been motivated by the recent studies (Kahan [50, 51]) which describe in detail the Politically Motivated Reasoning Paradigm (PMRP). We aim to create an Agent Based Model using biased information processing and Bayesian updating. Despite the recognized status of Bayesian updating in risk assessment and other areas, it is rather seldom used by the ABM community. To mention a few examples: Suen [87] has considered the effects of information coarsening (due to the agents' reliance on specialists for the relevant information) and the tendency to choose the sources which confirm pre-existing beliefs; Martins [56] has studied the case of a continuous opinion model under Bayes rules, looking for the long term evolution of the opinions; Bullock [16] has studied the conditions in which peoples' beliefs, updated using Bayesian rules, could, in the short term, instead of converging on a true value, diverge or even polarize. Ngampruetikorn and Stephens [59] have analysed the role of confirmation bias in consensus formation in a binary opinion model on a dynamically evolving network.

The flexibility offered by the Bayesian approach allows much greater complexity of the behaviour of the individual agents and, as such, offers potentially more relevant descriptions of social behaviours than the spin-based models. Of course, these benefits do not come without a price: there are many more degrees of freedom in the system, and therefore many more unknowns in properly setting up the ABM simulations. Still, the importance of the social phenomena observed around the world, in particular the various forms and effects of polarization, suggests the need for a deeper understanding of the underlying mechanisms, and makes the effort worthwhile.
A. Confirmation bias vs. PMRP
One of the best recognized biases in information processing is confirmation bias, defined by Wikipedia as 'a tendency to search for, interpret, favour, and recall information in a way that confirms one's pre-existing beliefs or hypotheses'. This definition stresses that the confirmation bias may operate on various levels: selecting and preferring the information sources, giving different weight to different sources, and internal mechanisms (such as memory preferences in the storing/recall of information). When people communicate, the individual confirmation bias effects may combine in a way that creates group effects such as echo chambers. As a result, even when faced with true information, an agent (or a group of agents) may form or maintain a false opinion due to the confirmation bias.

The motivated reasoning paradigm considers the ways in which goals, needs and desires influence information processing (Jost et al. [46]). These goals may be related to individual needs, but also to group or global ones, for example the goal of achieving or maintaining the person's position within a social group. In such a case, motivated reasoning may bias the information processing by substituting the person's desire to affirm the affiliation with the chosen in-group for the goal of truth-seeking. Seen through the lens of these desires, the apparently irrational choices (such as disbelief in well documented evidence and belief in unproven claims) become rational again. We believe, and act accordingly, in a way that is congruent with the perceived beliefs and actions of our preferred social group. As the goal of the person shifts from truth seeking to strengthening of the position within the social group, the disregard for the truth becomes rational, especially when the consequences of a rejection from the group are more immediate and important than the results of an 'erroneous' perception of the world. Kahan [50, 51] has provided a very attractive Bayesian framework, allowing one not only to describe the role of various forms of cognitive biases, but also to account for the empirical evidence of the differing predictions of the different heuristics, such as confirmation bias or political predispositions. The experiments with manipulated 'evidence' described by Kahan are very interesting.

While both mechanisms introduced above lead away from truth-seeking behaviour, their predictions might differ, especially with respect to new information. While the confirmation bias favours evidence in agreement with already held views (priors), politically motivated reasoning selects and favours information congruent with the person's political identity (defined by the in-group characteristics). The confirmation bias depends on internal agent states, while PMR involves the perception of external characteristics.

The vision of information processing comparing the two forms of bias, described by Kahan, is simple enough to become the framework of an ABM. As we shall argue, the Bayesian filtering approach is very flexible and may be applied to a variety of situations, contexts and types of processing bias.
Our present goal is to describe such a framework and to provide simple examples of the types of information processing leading to consensus or polarization.
The latter case is of special importance, as the current political situation in many democratic countries seems to be irrevocably polarized, with large social sections unable to find common ground on many extremely important issues. Our hope is to find, with the use of the model, suggestions for processes that may reverse this polarization and enable communication across the current divisions.
II. INDIVIDUAL INFORMATION PROCESSING MODEL

A. Overview of the model
The current work aims at a general, flexible model of individual opinion dynamics. We base our concepts on the Bayesian framework. Figure 1 presents the basic process flow, modelled after Kahan [50]. For simplicity, we shall assume that the belief which we will be modelling may be described as a single, continuous variable θ, ranging from -1 to +1 (providing a natural space for opinion polarization). The agent holds some belief on the issue, described at time t by a distribution X(θ, t). For example, if the agent is absolutely sure that the 'right' value of θ is some θ_0, then the distribution would take the form of a Dirac delta function centred at θ_0. Less 'certain' agents may have a different form of X(θ, t). This distribution is taken as a prior for a Bayesian update, leading to the opinion at t + 1. In the simplest case, the Bayesian likelihood factor would be provided by the new information input S_i(θ). Here the index i corresponds to various possible information sources. Kahan has proposed that, instead of this direct update mechanism (prior opinion + information → posterior opinion), the incoming information is filtered by the cognitive biases or predispositions of the agent. The filtering function F(S_i) transforms the 'raw' information input into the filtered likelihood F_L(S_i, θ), so that the posterior belief distribution X(θ, t+1) is obtained by combining the prior opinion distribution X(θ, t) with the likelihood filter F_L(S_i, θ).

It is important to note that different sources of information may be filtered in different ways. Trust in the source, the cognitive difficulty of processing the information, its emotional context, the agent's dominant goals: they all may influence the 'shape' of the filter. Moreover, we have to consider the ways that information pieces from various sources are treated. Two simple versions are shown in Figures 2 and 3. The first treats each source separately and in sequential order. Such an approach may be sensible in cases where new information arrives in well separated, time ordered units, e.g. daily newspaper editions or TV news programs. The second approach treats the sources in an integrative way: it accumulates the filtered likelihoods (each of which contains the information and its specific filter), with some weights, into a single total likelihood function. Such an approach may be better when various sources of information coexist at the same moment, e.g. when a group of people discusses the TV news. The weights associated with the sources could be different for each information processing event, depending on the relative importance and strength of the sources and other circumstances. Both approaches can be used (and combined) in the case of advanced models of specific systems.
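To make the update concrete, the following minimal sketch implements one such step on a discretized θ axis. It is an illustration only: the grid resolution, the helper names and the example parameters are our own assumptions, not part of the paper's specification.

```python
import numpy as np

THETA = np.linspace(-1.0, 1.0, 401)   # discretized opinion axis, theta in [-1, 1]
DTHETA = THETA[1] - THETA[0]

def normalize(dist):
    """Rescale a non-negative function on the theta grid to unit integral."""
    return dist / (dist.sum() * DTHETA)

def gaussian(mu, sigma):
    """Gaussian on the bounded theta grid, renormalized after truncation."""
    return normalize(np.exp(-0.5 * ((THETA - mu) / sigma) ** 2))

def bayes_update(prior, source, filt):
    """One step X(theta, t) -> X(theta, t+1).

    The filtered likelihood F_L(S_i, theta) is the product of the raw
    information distribution S_i(theta) and the filter function F(S_i);
    the posterior is the normalized product of the prior and F_L.
    """
    filtered_likelihood = source * filt
    return normalize(prior * filtered_likelihood)

# Example: a 'leftist' prior meets a truth-centred source through a neutral
# (uniform) filter, i.e. the unfiltered update.
prior = gaussian(mu=-0.7, sigma=0.15)
S_T = gaussian(mu=0.6, sigma=0.4)              # S_T(theta) with theta_T = 0.6
posterior = bayes_update(prior, S_T, np.ones_like(THETA))
print("mean belief before:", (THETA * prior).sum() * DTHETA)
print("mean belief after: ", (THETA * posterior).sum() * DTHETA)
```

Representing beliefs as arrays over a fixed grid keeps the update a simple element-wise multiplication followed by renormalization, which is all the Bayesian step of Figure 1 requires.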
B. Information sources

The information that influences the beliefs of people comes from multiple types of sources. There are, of course, personal experiences, which may provide high impact information about specific facts and events and, with the application of some cognitive processes, about trends, estimates, diversity and prognoses. Direct experiences may be thought of as immediate and therefore trustworthy, but in many cases we rely on memories, which may provide false information. Some other cognitive biases are also relevant for personal observations: we may fall for certain illusions, disregard a part of an experience and put emphasis on other parts, even to the degree of actually inventing events that did not take place.
Figure 1. Basic model of information processing. An agent holds a prior belief about an issue, described by a distribution X(θ, t). We assume a simple, one-dimensional 'opinion parameter' θ ranging from -1 to 1. The information on the issue, coming from the source S_i, has a distribution S_i(θ). This information is filtered by a function F(S_i), specific to the information source. The form of the filtering function may vary, depending on the focus of the model. For example, if we assume fully rational, truth-seeking agents, the filter would be centred around the 'true' value of the parameter θ. On the other hand, in the case of a model based on cognitive bias, the filter function would be simply related to the prior beliefs of the agent. In the case of PMRP, the filter is related to the distribution of beliefs held/approved by the agent's in-group or, more precisely, to the agent's perception of such a distribution. Combining the information input with the filter function yields the filtered likelihood information F_L(S_i). Bayesian update of the agent's belief X(t) via F_L(S_i) leads to the changed, posterior distribution of beliefs X(θ, t+1).

The second source of information is related to the group of people with whom a person identifies (the in-group). These inputs may come from in-group information exchanges, either in person or via electronic or traditional communication media. The former have become increasingly important during the past decade, especially among the younger population. In addition to the interactions with specific individuals in the in-group, the in-group may influence an agent's beliefs via cumulative indicators. These would include the official or semi-official statements of the group's views on specific issues, but also unofficial and media information about the group norms, average opinions and trends. The latter are especially interesting, as they may come both from within the group and from outside. In such a case the information about the in-group views and norms may be manipulated and distorted.

The last group of sources is related to any source outside the in-group. This may include interactions with people outside one's own self-identification group and the media perceived as not associated with the in-group. In the case of the media, the information is prepared by someone, which includes both the selection and the presentation of the information.

The information which we use to fortify or to change our beliefs may be manipulated 'at source'. In personal interactions with other people we may get wrong impressions due to many forms of dishonesty or distortion. Traditional sources of news are also subject to misrepresentation. The ideal of fair and balanced journalism (giving comparable attention to all contradicting views) may also, at times, be considered manipulative, especially when it results in undue attention and coverage given to a tiny minority of views. An example of the negative consequences of such 'balanced' reporting is provided by the case of the anti-vaccination movement (Betsch and Sachse [13], Nelson [58], Tafuri et al. [93], Wolfe et al. [105]).
Figure 2. Sequential model of information processing when multiple sources are present. As before, an agent holds a prior belief about an issue, described by a distribution X(θ, t). We assume a simple, one-dimensional 'opinion parameter' θ ranging from -1 to 1. The information on the issue, coming from the source S_i, has a distribution S_i(θ). A different distribution S_k(θ) may come from another source S_k. Their filtering functions F(S_i) and F(S_k) may differ and, as a consequence, the likelihood functions F_L(S_i) and F_L(S_k) would also differ. Bayesian update of the agent's belief X(t) via F_L(S_i) and F_L(S_k) is applied sequentially, leading to the changed, posterior distributions of beliefs X(θ, t+1) and X(θ, t+2). In the case of many sources, their relative importance may be described by the number of times they are present in the chain of evaluations.

In reality, however, manipulations due to unbalanced reporting are much more frequent. The polarization of both the traditional channels (newspapers, radio, TV) and the Internet sources (WEB versions of the traditional channels and independent WEB pages, blogs, Facebook pages and tweets) is a well known phenomenon (Adamic and Glance [2], Campante and Hojman [17], Jerit and Barabas [45], Lawrence et al. [54], Prior [69], Stroud [86], Wojcieszak et al. [103]). As many people rely on a limited number of information sources, the spectrum of the information reaching them could be heavily distorted. This selective attention/selective exposure may lead to the echo-chamber phenomenon, where a person sees and hears only the information supporting the 'right' beliefs.

The US presidential election in 2016, with its increasing role of social media as information sources, brought our attention to yet another form of 'at source' information manipulation: fake news. The relative ease of creating false information, in some cases supported by manipulated images, voice and video recordings, of posting it online and of creating a web of self-supporting links allows the perpetrator to spread such news. The trust associated with social networks (for example Facebook or Twitter links) makes the spreading of such information faster, especially if the fake news is designed to pass through the most common information filters.

As examples, we propose three specific forms of the distributions of the source information S(θ), suitable for the ABM approach. The first, S_T(θ), corresponds to the results of social efforts to describe the phenomenon as accurately and objectively as possible. Let's assume that for the topic in question (where the beliefs are described by the parameter θ) there is some specific value that corresponds to an objectively discoverable optimum, θ_T. This may correspond to an exact description of a situation, or a universally optimal solution to a problem, or any other situation in which, through rational, communicative processes, it is possible to arrive at a 'true' value of θ_T. This would mean that, eventually, all beliefs other than this value should be labelled 'erroneous'. We assume that S_T(θ) takes the form of a Gaussian distribution centred around θ_T.
In the following simulations we shall use S_T(θ) as the source. The second possible form of a popular information distribution, a 'flat' one S_F(θ), assumes that all possible values of θ are represented equally in the information stream (an extreme version of the 'fair and balanced' news). The third form, S_P(θ), is designed to represent partisan bias in the news stream, taking the form of a sigmoidal function favouring one of the alternatives (for example θ > 0). The three forms are shown in Figure 4.
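A sketch of the three source distributions follows, under the assumptions stated elsewhere in the paper (θ_T = 0.6 and standard deviation 0.4 for S_T from the simulation setup; the 6:1 partisan ratio from the Figure 4 caption); the sigmoid steepness is our own illustrative choice.

```python
import numpy as np

THETA = np.linspace(-1.0, 1.0, 401)
DTHETA = THETA[1] - THETA[0]
normalize = lambda d: d / (d.sum() * DTHETA)

# S_T: Gaussian around the objectively discoverable optimum theta_T = 0.6.
S_T = normalize(np.exp(-0.5 * ((THETA - 0.6) / 0.4) ** 2))

# S_F: flat distribution, the extreme 'fair and balanced' news stream.
S_F = normalize(np.ones_like(THETA))

# S_P: sigmoidal partisan stream favouring theta > 0, with the two plateau
# levels in a 6:1 ratio far from the transition region.
S_P = normalize(1.0 + 5.0 / (1.0 + np.exp(-10.0 * THETA)))

print("probability mass at theta > 0 under S_P:", S_P[THETA > 0].sum() * DTHETA)
```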
C. Types of information filters
Figure 3. Integrative model of information processing when multiple sources are present. As before, an agent holds a prior belief about an issue, described by a distribution X(θ, t). The information on the issue, coming from the sources S_i and S_k, has the distributions S_i(θ) and S_k(θ). Their filtering functions F(S_i) and F(S_k) may be different and, as a consequence, so would the likelihood functions F_L(S_i) and F_L(S_k). Instead of a sequential application, the Bayesian update of the agent's belief X(t) proceeds via a single application of a weighted combination of F_L(S_i) and F_L(S_k): F_L(TOT) = W_i F_L(S_i) + W_k F_L(S_k), leading to the changed, posterior distribution of beliefs X(θ, t+1). The weights determine the relative importance of the different information sources in a single update.

The way in which information received from various sources is evaluated and used to form new beliefs depends not only on the sources, but also on the goals of a person. These goals may allow us to construct rules that would create and update the information filters. In some cases they would be independent of the characteristics of the person; in other cases they would depend on them, which would make the process of belief modification self-referential. Below is a partial list of the filter types that could be used in our agent based modelling (a code sketch of two of them follows the list). The filters are distinguished by their origin (internal or external to the person), dependence on some objectively measurable characteristics, the possibility of an orchestrated manipulation and, finally, normative value.

• Truth seeking filter. It corresponds, on an individual level, to the objective source of information. The truth seeking filter could take the form of a distribution localized around θ_T, so that the eventual, repeated application of the information processing would lead the agents to converge their beliefs on θ_T. An example of such a filter would be a narrow Gaussian distribution centred on θ_T. This type of filter is at the core of the 'rational discourse' and 'objective reality' assumptions, and while important from the philosophical and moral standpoints, it seems to be an exception rather than a rule in social life. Because the value of θ_T is independent of the agents, we put the truth seeking filter into the 'external' category. And because the discovery of the value is assumed to rely on well established processes (for example depending on scientific methods), we also assume that the truth seeking filter in its pure form is not liable to manipulation. Applied to the flat (balanced) information S_F(θ), the truth seeking filter would create filtered information resembling the objective, truth-related source S_T(θ).

• Confirmation bias filter. The tendency to give more weight to information supporting a person's current views and to disregard sources disconfirming these views is well known in psychology. Such a filter is relatively easy to introduce into the ABM framework: the filtering function F_CB(S(θ)) could be defined in terms of the current belief X_j(θ, t), either directly or slightly modified, for example via some form of broadening or narrowing of the tolerance range for differing beliefs. A very widely known example of the use of confirmation bias in an ABM environment is the bounded confidence model (Deffuant et al. [28], Hegselmann and Krause [42], Weisbuch [102]). In the model an agent interacts only with other agents who have an opinion sufficiently close to its own, disregarding agents with opinions separated by more than a certain threshold ε. The confirmation bias filter belongs to the internal category and, as such, it cannot be manipulated directly by outside sources. A possibility exists, however, that the manipulation would act on the importance or tolerance of the filter in evaluating certain sources of information.
Figure 4. Graphical representation of examples of information sources. S_F(θ): the flat distribution representing an extreme form of 'balanced' news. S_T(θ): an example of a distribution focusing on a 'true' value accepted by the whole community. In this example, the true value is set at θ = 0.6 and the information distribution has a rather broad shape. Lastly, S_P(θ) represents partisan bias, in this case favouring positive θ values at a 6:1 ratio.

• Memory priming/availability filter. This is another example of an internal filter which, however, is much more easily manipulated than the confirmation bias filter. This is because the confirmation bias compares the new information with currently held beliefs, which may be quite deeply ingrained, especially if they depend on moral foundations (Haidt [38, 39, 40], Jost et al. [47]). In contrast, the availability filter acts via the additional attention given to facts that are quickly accessible to our minds. Thanks to various forms of priming, its effects may be effectively stimulated and steered by outside influence: our peers or the media (Sunstein [89], Tversky and Kahneman [96, 99]). In terms of an ABM approach, such a filter could be approximated, for example, by the shape of the previously encountered information source.

• Politically Motivated Reasoning (PMR) filter. The notion of the PMR filter advocated by Kahan [50] is based on an assumption of perfectly rational behaviour, but with re-defined personal goals. Instead of the focus on the exact description driving the truth-seeking filter, the rationality of a person's actions is judged by their usefulness for the goal of preserving or improving the position within a specific social group (the in-group). In such a case, the dominant processes would be those which facilitate alignment with the in-group acceptance criteria, which often include the expression of specific beliefs. Thus the PMR filter would be based on the perceived in-group opinions. As such, the PMR filter is an example of an external filter, that is, one based on some information source rather than on the internal characteristics of the agent. For example, it could be a Gaussian distribution centred at the average belief of the in-group. It is worth noting that in some cases it can be manipulated. The range of such manipulation depends on the specific social context. Even in the cases when the knowledge about the in-group beliefs comes from direct interactions between the members of the in-group, some external social pressures might limit the expression of these beliefs. For example, political correctness might prevent overt expressions of some opinions, leading to a departure of the perceived average value from the 'true' average of internally held, but not expressed, beliefs. Another possibility of manipulation arises when the information about a large in-group (such as a political party's support base) is mostly available via some external media: press, TV, social networks... The medium may withhold some information, enhance other, and in such a way distort the perceived in-group opinions and thus manipulate the agent's PMR filter.
• Simplicity/attention limit filter. This is an internal filter, related to the culturally and technologically driven change in the way external information is processed. Due to the information deluge, there is an increasing dominance of short forms of communication, especially in the Internet based media: WEB pages, Internet discussions, social media (Djamasbi et al. [29, 30]). The simplification (or oversimplification) of important issues, necessary to fit them to the short communication modes, may act against beliefs that do not lend themselves to such simplification. This part of the filter acts at the creation side of the information flow. A decreasing attention span and capacity to process longer, argumentative texts act as another form of filter, this time at the reception end of the flow. There are numerous forms of psychological bias related to and leading to such filtering, from venerable and accepted heuristics (like Occam's razor), through the law of triviality and the bike-shed syndrome (Parkinson [68]), to a total disregard for too complex viewpoints (Qiu et al. [70]). Together, these tendencies can create a filter favouring information that is easily expressed in a short, catchy, memorizable form. There is no simple universal form of this filter for the ABM approach, because in different contexts different beliefs might be easier to express in the simplest way.

• Emotional filter. Some topics, contexts and communication forms may depend, in their processing, on the affective or emotional content. This may create a processing filter, for example one that favours extreme views, as they are typically more emotional than the consensus oriented, middle-of-the-road ones. Emotionally loaded information elicits a stronger response and longer lasting effects (Allen et al. [4], Barsade [6], Berger and Milkman [9], Bosse et al. [15], Chmiel et al. [22, 23], Clore and Huntsinger [24], Haidt [37], Hatfield et al. [41], Nielek et al. [60], Reifen Tagar et al. [71], Sobkowicz [77], Sobkowicz and Sobkowicz [80], Thagard and Findlay [95]). The specific form of the filter depends on the mapping between the belief range and the associated emotional values. Furthermore, the emotional filter may depend on the current agent belief function, e.g. anger directed at information contrary to the currently held beliefs, or at a person who acts as the source of the information.

• Algorithmic filters. An increasing part of the information reaching us comes from Internet services such as our own social media accounts, personalized search profiles etc. The service providers organize and filter the content that reaches us, often without our knowledge that any filter exists, and even more often without the knowledge of how it works. These external algorithmic filters, shaping our perception, not only skew the opinions but, more importantly, often limit the range of topics we are aware of and the opinions related to them (Albanie et al. [3], Pariser [67]). In some cases the effect of an algorithmic filter is similar to the internal confirmation bias (e.g. the search engine prioritizes the results based on the already recognized preferences of the user). In other cases, the machine filter may deliberately steer the user away from certain information, based on decisions unrelated to the particular user, fulfilling the goals of some other party.
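As mentioned above, the sketch below shows how the confirmation bias and PMR filters (and, for comparison, a simple emotional filter) could be encoded on the discretized θ axis; the specific centres, widths and the quadratic emotional shape are illustrative assumptions modelled loosely on the description of Figure 6.

```python
import numpy as np

THETA = np.linspace(-1.0, 1.0, 401)
DTHETA = THETA[1] - THETA[0]
normalize = lambda d: d / (d.sum() * DTHETA)
gaussian = lambda mu, s: normalize(np.exp(-0.5 * ((THETA - mu) / s) ** 2))

# Confirmation bias: the filter simply IS the agent's current belief X_j(theta, t).
agent_belief = gaussian(-0.8, 0.1)      # a rather extreme 'leftist' agent
F_conf = agent_belief.copy()

# PMR: a Gaussian centred on the perceived average belief of the in-group;
# the in-group position and width used here are assumed values.
F_pmr = gaussian(-0.6, 0.2)

# Emotional filter favouring extreme views at both ends of the opinion axis.
F_emo = normalize(THETA ** 2)
```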
D. The filtering process
An interesting question, important for the practical ABM implementation, is: how should the various filters be applied to obtain the relevant filtering function F(S_i, θ)? A sequential application of filters focuses on the parts of the information that are minimized or deleted by each filter. In contrast, a parallel application focuses on the information that is allowed by each of the filters. In reality some of the filters are applied sequentially, that is, a person considers only the information that conforms to all of these filters (e.g. it must be highly emotional and in agreement with the person's views). In other cases, some filters may be added, for example a person may accept the information that confirms his/her views (the confirmation bias) or the information that agrees with the perceived views of the in-group. As a result, the overall shape of the filter function may become quite complex. Moreover, we should remember that even if we treat some external filters as relatively stable, the ones associated with a person's own views or with in-group compliance may evolve in time.

The Bayesian-like form of the filtering process, multiplying the incoming information S_i(θ) by the filter function F(θ), is very efficient: a single process may decisively change the shape of the information distribution. For this reason we introduce here a process control parameter, the filtering efficiency f. Its role is to determine the relative strength of the influence of the specific filtering function on the incoming information. In particular, the effective filter function is assumed to take the form fF(θ) + (1 − f)U, where U is a uniform function.
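A sketch of the efficiency mixing and of the two multi-source schemes of Figures 2 and 3 follows; the weights and example parameters are illustrative assumptions.

```python
import numpy as np

THETA = np.linspace(-1.0, 1.0, 401)
DTHETA = THETA[1] - THETA[0]
normalize = lambda d: d / (d.sum() * DTHETA)
gaussian = lambda mu, s: normalize(np.exp(-0.5 * ((THETA - mu) / s) ** 2))
U = normalize(np.ones_like(THETA))      # uniform 'no filtering' component

def effective_filter(F, f):
    """Soften a 'pure' filter F by its efficiency: f*F + (1 - f)*U."""
    return f * F + (1.0 - f) * U

def update_sequential(prior, filtered_likelihoods):
    """Figure 2: apply each source's filtered likelihood one after another."""
    belief = prior
    for FL in filtered_likelihoods:
        belief = normalize(belief * FL)
    return belief

def update_integrative(prior, filtered_likelihoods, weights):
    """Figure 3: first combine the filtered likelihoods into a weighted sum."""
    FL_tot = sum(w * FL for w, FL in zip(weights, filtered_likelihoods))
    return normalize(prior * FL_tot)

# Two sources seen through half-effective filters (all values illustrative).
prior = gaussian(0.0, 0.2)
FL_1 = effective_filter(gaussian(0.6, 0.4), f=0.5)
FL_2 = effective_filter(gaussian(-0.4, 0.3), f=0.5)
post_seq = update_sequential(prior, [FL_1, FL_2])
post_int = update_integrative(prior, [FL_1, FL_2], weights=[0.7, 0.3])
```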
E. Information processing and memory effects

The Bayesian processing of information may lead to very quick and dramatic revisions of the individual beliefs. Some shapes of the likelihood distributions (especially those with a narrow maximum) transform prior belief distributions into completely different posteriors. Yet, with some exceptions, our interactions with other people or with the media sources are rarely so transformative. Martins [56] has proposed a modification of the original Bayesian rules, relaxing the transformation speed. He has proposed that only a fraction p of the encounters with the information sources leads to informative processing, characterized by the likelihood function F_L(S_i). In the remaining 1 − p cases, Martins has considered the source to be treated as uninformative (characterized by a uniform distribution), and the resulting information source is a mixture of the two, in a way similar to our treatment of the filtering efficiency.

In our approach, in the remaining 1 − p cases, the encounter is ignored and the information is not processed. The simplest approach to describe such a situation would be to leave the belief distribution unchanged, X_j(θ, t+1) = X_j(θ, t), which may be treated as the case of the agent's perfect memory. However, as we shall show in the next section, the repeated application of the Bayesian updates leads to a narrowing of the agents' belief distributions. Eventually, the individual beliefs would become more and more focused, which influences the whole system dynamics. For this reason we will introduce an imperfect memory mechanism that restores some level of individual belief indeterminacy, in which the agent reverts partially to its intrinsic value of the standard deviation of the X(θ, t) distribution. This is described as follows: in the case of an ignored information event (probability 1 − p) the agent's belief distribution does not remain unchanged but becomes

X_j(θ, t+1) = m X_j(θ, t) + (1 − m) N(⟨θ⟩_j(t), σ_j),    (1)

where the memory fidelity parameter 0 ≤ m ≤ 1 describes the ratio of preserving the current distribution intact, and N(⟨θ⟩_j(t), σ_j) is a Gaussian distribution centred at the current average belief of the agent ⟨θ⟩_j(t), but characterized by a fixed standard deviation σ_j, characteristic for each agent. Thus for perfect memory (m = 1) we recover the unchanged distribution condition, and for m = 0, an agent 'left to itself' preserves the current average value of the belief, but resets the indeterminacy of its beliefs to σ_j. The information processing is graphically presented in Figure 7.

The results of resetting the indeterminacy of an individual agent belief distribution are shown in Figure 8. While the 'broadening' admixture may seem comparatively small (at least for the depicted value of m), it plays an important role in shaping the evolution of the beliefs of the agents under the influence of information sources. The origin of such a reset of the indeterminacy may be explained by numerous encounters with a range of beliefs, other than the main source considered in the simulations, which are too weak to significantly shift the agent's average opinion, but introduce some degree of uncertainty.
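A sketch of the ignored-encounter step of Eq. (1) on the discretized grid (the helper names are ours):

```python
import numpy as np

THETA = np.linspace(-1.0, 1.0, 401)
DTHETA = THETA[1] - THETA[0]
normalize = lambda d: d / (d.sum() * DTHETA)

def mean_belief(X):
    """Current average belief <theta>_j(t) of an agent."""
    return (THETA * X).sum() * DTHETA

def memory_reset(X, sigma_j, m):
    """Eq. (1): X -> m*X + (1 - m)*N(<theta>_j(t), sigma_j).

    N is a Gaussian centred at the agent's current mean belief, with the
    agent's intrinsic width sigma_j; m = 1 leaves the belief unchanged.
    """
    N = normalize(np.exp(-0.5 * ((THETA - mean_belief(X)) / sigma_j) ** 2))
    return m * X + (1.0 - m) * N
```

Since both X and N are normalized, the mixture stays normalized, and the reset changes the width of the belief distribution without moving its mean.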
Figure 5. Graphical representation of selected filtering types and their sources. The mix of various forms of filtering may depend on the source of the information being evaluated; for example, in certain situations the personal, truth-seeking filter might be dominant, while in other situations the focus on in-group acceptance would favour the PMR filter. When the media or the Internet are the source of the information, the personal filters may be modified by the effects of personalized filtering by the search/presentation algorithms of the content providers, or by intentional modifications in the news industry. To allow for the possibility of an imperfect application of the filters, in the final stage the 'pure' filtered likelihood may be combined with a non-specific, uniform function U, via the filter effectiveness factor f. The resulting function has the form fF + (1 − f)U.

Figure 6. Examples of filtering mechanisms. Top panel: three 'pure' filters (with effectiveness f = 1). CONF (violet line): confirmation bias, taken as an example of an individual agent's belief, in this case a rather extreme leftist. PMR (brown line): Politically Motivated Reasoning filter, calculated as the average belief of the agent's in-group (the leftists, in this case). EMO (red line): an example of a simple emotional filter, favouring extreme views. Bottom panel: effects of imperfect filtering, in which the effectiveness factor f is assumed to be equal to 0.3. In such a case, the resulting filter function is given by fF + (1 − f)U, where F is the 'pure' filter and U is a uniform function.

Figure 7. Details of the information processing. The likelihood function F_L(S_i(θ)), derived from the information source S_i, is applied only in the case of an 'informative' encounter, with probability p. In such a case X_j(θ, t+1) = X_j(θ, t) F_L(S_i(θ)). In the remaining cases, the encounter is ignored. Without processing the new information, the agent's belief function may remain unchanged, or it may become somewhat relaxed, by the addition of a Gaussian function N(⟨θ⟩_j(t), σ_j), centred at the current average θ for the agent and characterized by a standard deviation equal to the starting value σ_j. Depending on the value of the memory parameter m, the posterior belief of the agent becomes X_j(θ, t+1) = m X_j(θ, t) + (1 − m) N(⟨θ⟩_j(t), σ_j).
III. BASIC SIMULATION ASSUMPTIONS
In actual situations, both the information sources and the filters described in the previous section combine their effects in quite complex ways. We encounter, in no particular order, information sources of various types, content and strength, in some cases acting alone, in others combined. To elucidate the model effects we shall initially focus on drastically simplified systems, in which we show the effects of the repeated application of the same filter to the same information source distribution S_i(θ), for a range of starting belief distributions X_j(θ, 0) (where the index j denotes individual agents). The aim of this exercise is to show whether particular filters (no filter, confirmation bias, PMR) lead to stable belief distributions, polarization, emotional involvement etc.

As noted, for the simulations shown in this paper, we shall be using the truth-related form of the information source, S_T(θ), assumed to take a rather broad Gaussian form, centred at θ_T = 0.6 and with a standard deviation equal to 0.4. This choice of the information source distribution is motivated by two reasons. The first is to check whether the simulated society is capable of reaching consensus when the information source points to a well defined value. The second reason is to study the effects of asymmetry. Obviously, it is much easier for the agents whose initial opinion distribution favours positive θ values to 'agree' with an information source favouring a positive θ_T. In contrast, the agents starting with belief distributions preferring negative θ values would have to 'learn', to overcome their initial disagreement and to significantly change their beliefs.

Each of the agents is initially characterized by a belief function of Gaussian form (bounded between −1 and +1 and suitably normalized). The standard deviation parameters for the agents, σ_j, are drawn from a uniform random distribution limited between 0.05 and 0.2. Three separate sets of agents are created and used in the simulations: leftists, centrists and rightists (we note here that these names have no connection with real world political stances and refer to the position on the abstract θ axis). Each agent community is composed of N agents (in the simulations we use N = 1000). The leftists have their initial Gaussian centre values ⟨θ⟩_j(0) drawn from a uniform random distribution bounded between −1 and −0.5. The centrist group is formed by agents with ⟨θ⟩_j(0) drawn from between −0.1 and 0.5, and the rightists have ⟨θ⟩_j(0) drawn from between 0.5 and 1. Figure 9 shows examples of the agent belief functions for agents from each of the three groups (thin blue, gray and green lines) and the initial ensemble averaged belief distributions X_G(θ, 0), where G stands for L (leftists), C (centrists) and R (rightists).
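The population setup described above can be sketched as follows; the group bounds repeat the values printed in the Figure 9 caption, and the random seed is arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1000

def make_group(lo, hi):
    """Draw each agent's initial belief centre <theta>_j(0) and width sigma_j."""
    centres = rng.uniform(lo, hi, N)
    sigmas = rng.uniform(0.05, 0.2, N)
    return centres, sigmas

leftists  = make_group(-1.0, -0.5)
centrists = make_group(-0.1, 0.5)
rightists = make_group(0.5, 1.0)
```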
Figure 9. Initial belief distributions of the three classes of agents. Blue: leftists (maximum of the initial belief drawn randomly from between -1 and -0.5). Black: centrists (maximum belief drawn from between -0.1 and 0.5). Green: rightists (maximum belief between 0.5 and 1). Thin, light lines: examples of the belief distributions of a few individual agents, differing in their initial centre of the belief θ_j and the width of the belief distribution σ_j. Thick lines: averaged distributions for each group.

These ensemble averages are shown by thick lines. There is some overlap between the leftist/centrist and centrist/rightist groups, but practically no overlap between the leftist/rightist groups.

The simulations proceed in discrete time steps. The time is measured in units in which each agent in the current group has had a single chance to interact with the information source or to ignore it (with the respective probabilities of p and 1 − p). As we shall show in the following sections, some effects become visible after just a few time steps, but others become important after thousands or tens of thousands of events. From the point of view of a possible application of the model results to real life phenomena we should consider the mapping of the 'simulation time' to real hours, days or weeks. In the current work we focus on the long term behaviour of the system, especially on the final stable conditions.

IV. MODEL RESULTS

A. Case 1: Unfiltered effects of true information
We shall start the description of the model results with a relatively simple case, with the aim of showing the effects of some of the simulation parameters.
Figure 10. No filtering applied, perfect memory. Time evolution of the averages of beliefs of the three groups of agents. Rightists: green; centrists: black; leftists: blue. Thin lines show the evolution of the average belief ⟨θ⟩_j(t) for individual agents j. Thick lines show the evolution of the ensemble averages for each group of agents, ⟨θ⟩_G(t). Without filtering, the truth-focused information eventually leads all the agents to adopt beliefs centred on θ_T. The process for our choice of θ_T = 0.6 is, of course, the easiest for the rightists, who start with beliefs close to this value. However, eventually all the groups achieve consensus. The case of the leftists (initially holding views opposing θ_T) is quite revealing: agents with initially large tolerances (large initial σ_j) accept the truthful information quickly; agents with very focused initial opinions (small values of σ_j) hold on for longer times and then very quickly join the majority. Simulations that use a p value below 1 show periods in which the individual opinions remain unchanged (indicated by the flat segments of the thin lines).

The first case is based on the unfiltered processing of the 'truth-related' information, S_T(θ), as shown in Figure 4. This is, as we have noted, equivalent to the situation where the information flow is nonspecific (uniform) but the agents employ a truth-seeking filter of the same form as S_T(θ). As the θ_T value is positive (equal to 0.6), the most interesting question is how such information would influence the agents who initially hold the opposite views (the 'leftists').

The speed with which the agents converge to the true value consensus depends on the significant information processing probability p. Figure 10 presents the time evolution of the individual agents' average beliefs (⟨θ⟩_j, thin lines) and the ensemble averages ⟨θ⟩_G for the three agent groups. We start with agents characterized by perfect memory (m = 1). The time evolution of the average ⟨θ⟩_j for p < 1 looks qualitatively different than in the case of p = 1: it exhibits a step-like structure, due to the 'freezing' of beliefs when no processing takes place. However, the ensemble averages are quite similar for p < 1 and p = 1.
Figure 11. No filtering applied, perfect memory. Evolution of the mean value of the belief ⟨θ⟩_L(t), averaged over the group of 'leftists', due to the 'truth-related' information stream. Time is measured in interactions per agent. The black-red curves represent various values of the parameter p. Decreasing the probability of information-carrying encounters (smaller p) makes the evolution of the beliefs slower. The pure Bayesian evolution (p = 1) very quickly (in less than 1000 time steps) leads to a distribution of beliefs centred around θ_T. For p < 1 the evolution of ⟨θ⟩_L(t) is stretched in time proportionally to p. Rescaling time to t′ = pt shows the invariant shape of the evolution (inset).

Figure 11 shows the dependence of the time evolution of the average belief for the whole leftist group, ⟨θ⟩_L, on the value of the parameter p (note the logarithmic scale of the time axis). In fact, a simple rescaling of the time axis to t′ = pt (shown in the inset) shows that the evolution is really a simple slowdown due to inactivity periods, when no information is processed. Thus for perfect memory (i.e. for m = 1), the role of p is rather trivial. It becomes more important when the 'idle' times are used to partially reset the individual uncertainty.

The truth-focused information flow eventually convinces all the agents to believe in the 'true' value of θ_T = 0.6, regardless of their initial positions. The process is fastest for agents with relatively broadminded beliefs (high σ_j). For agents with very narrow initial belief distributions the transition is shifted to later times, but then it is almost instantaneous (typical for a Bayesian update of single valued probabilities rather than distributions). The changes in the form of the belief distribution consist of a more or less gradual 'flow' of beliefs from the original form to a belief centred around the maximum of S_T(θ). This is well illustrated by Figure 12, which presents the time evolution of the ensemble average belief for the leftist and the rightist groups.
No filtering applied, perfect memory.
Timeevolution of the ensemble averaged belief distribution overthe groups of leftists and rightists due to ‘truth-related’ infor-mation stream. Time is measured in interactions per agent.Black-blue and black-green curves: averaged belief ¯ X G ( θ ) .Red curve shows, for comparison, the original informationdistribution S T ( θ ) . Simulations use p = 1 value. As the timepasses, the beliefs transform from the starting polarized dis-tributions (thick black lines) and converge to the ‘true’ valueof θ T = 0 . . presents the time evolution of the ensemble average be-lief for the leftist and the rightist groups.To understand the effects of the memory parameter m , it is illustrative to study the effects of the indeter-minacy reset on the evolution of the individual opiniondistributions X j ( θ, t ) . As shown in Figures 13 and 14 therelaxation of the indeterminacy introduced by imperfectmemory factor m < leads to a qualitatively different fi-nal form of the individual belief distributions. Instead ofa set of narrow, delta-like functions grouped close to the θ T value typical for m = 1 , the existence of belief relax-ation leads to distributions of width comparable to theoriginal values of σ j centred exactly at θ T . Thus, whilethe final ensemble average may be similar, the underlyingstructure of the individual beliefs is quite different.In the case of lack of filtering, the effects of the inde-terminacy reset on group averages are rather subtle. Fora given value of p , decreasing the memory factor leadsto a small, but observable shortening of the time scale ofreaching the truth-based consensus (Figure 15). It is in-teresting that even a very small admixture of uncertaintyreset ( m = 0 . instead of m = 1 ) significantly influencesthe evolution of the group averaged belief (cid:104) θ (cid:105) L . B. Case 2: Individual confirmation bias filter oftrue information, perfect memory ( m = 1 ). In the previous section we have shown that under theinfluence of the truth-related information, without filter-ing, all the agents eventually converge their beliefs on thetrue value suggested by the information source. This isnot surprising, as the process is a simple, repetitive, ap-plication of a Bayesian belief modification. We turn nowto the issue of the effects of filtering of the informationsources.We shall start with the individual agent based con-firmation bias filter . There are two reasons for thischoice. The first is that confirmation bias is widely rec-ognized in psychological literature, so it ‘deserves’ a thor-ough treatment in the ABM framework. The second rea-son is a relative simplicity of the filter effects. Supposethat the information flow on which the filter acts is non-specific (i.e. uniform). If the initial belief distribution isgiven by a Gaussian function with the standard devia-tion σ , then the application of the same function actingas the likelihood filter would lead to the posterior beliefin the Gaussian form, but with σ decreased by a factorof √ . A repeated information processing would eventu-ally lead to a Dirac delta-like belief distribution. In otherwords, repeated application of confirmation bias narrowsand freezes one’s own opinions. Once should, therefore,expect that the confirmation bias filter should diminishthe effects of specific information sources, such as thetruth-related source S T ( θ ) .The simulation setup for the case of confirmation biasfiltering of true information is relatively simple. 
The simulation setup for the case of confirmation bias filtering of true information is relatively simple. At every time step, with probability p, each agent uses its current belief X_j(t) as the pure filter. In this case the final likelihood function is defined by

F_L(θ) = (f X_j(θ, t) + (1 − f) U) S_T(θ),    (2)

where we use the filter effectiveness f as a parameter. As before, with probability 1 − p, the agent does not process the information. In this section we focus on situations with m = 1, when the agent simply retains its previous belief. In what follows, we use a fixed value of p. The crucial parameter in Case 2 is the filter effectiveness f.

Figure 16 presents four snapshots of the evolution of the individual belief distributions X_j(θ) for the leftist group. The individual distributions change under the competing influences of the confirmation bias filter (progressively narrowing the belief distributions) and the information (shifting the beliefs towards higher values of θ). The relative importance of these two factors depends on the value of the filtering effectiveness factor f. If the pure form of the filter is used (f = 1), the individual beliefs coalesce to the delta form in less than 10 time steps and the average beliefs of the three groups remain practically unchanged (left panel in Figure 17). Thus, despite the availability of true information, the centrist and leftist groups keep their beliefs.
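One Case 2 step, implementing Eq. (2), may be sketched as follows; the grid, the helper names and the example parameter values are our own illustrative choices.

```python
import numpy as np

THETA = np.linspace(-1.0, 1.0, 401)
DTHETA = THETA[1] - THETA[0]
normalize = lambda d: d / (d.sum() * DTHETA)
gaussian = lambda mu, s: normalize(np.exp(-0.5 * ((THETA - mu) / s) ** 2))

S_T = gaussian(0.6, 0.4)                # truth-related source
U = normalize(np.ones_like(THETA))      # uniform component of the filter
rng = np.random.default_rng(1)

def case2_step(X, f, p):
    """One Case 2 step: Eq. (2) with probability p, else perfect memory."""
    if rng.random() >= p:
        return X                        # encounter ignored, belief unchanged
    F_L = (f * X + (1.0 - f) * U) * S_T # Eq. (2)
    return normalize(X * F_L)

# A single 'leftist' agent processing the source repeatedly.
X = gaussian(-0.7, 0.15)
for _ in range(2000):
    X = case2_step(X, f=0.5, p=0.1)
print("final mean belief:", (THETA * X).sum() * DTHETA)
```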
No filtering applied, perfect memory.
Snapshots of examples of the individual agent belief distributionfunctions for the leftist agents under the influence of the unfiltered S T ( θ ) information source. For the simulations p = 0 . and m = 1 (that is, perfect memory is assumed and no indeterminacy of beliefs occurs). As expected, the individual beliefs movetowards the true value of θ T = 0 . , at the same time becoming increasingly narrow. There is only a partial overlap of theindividual opinions. The snapshots are taken at t = 5 , , and (shown in clockwise order). non-negligible value of f = 0 . (corresponding to Fig-ure 16), we observe some change (more pronounced forthe leftist group, where the dissonance between the ini-tial views and the true information is the largest, middlepanel in Figure 17).It is only for very small values of f that the final dis- tributions of beliefs begin to converge towards the truth-related consensus. Even for f = 0 . there is a sizeablegap between the rightists and the centrists and leftists.Figure 18 presents the shape of the ensemble averagedbelief distributions for each of the three groups at a verylate time t = 10000 .5 Figure 14.
Figure 14. No filtering applied, reset of belief indeterminacy. Snapshots of examples of the individual agent belief distribution functions for the leftist agents under the influence of the unfiltered S_T(θ) information source, with the indeterminacy reset present (m < 1). In this case, not only do the individual beliefs move towards the true value θ_T, but they also become almost fully overlapping. The snapshots are taken at four consecutive times, starting at t = 5 (shown in clockwise order).

Figure 19 presents the dependence of the ensemble averaged values of the average belief for each of the three groups on the filtering effectiveness f. For f close to 1, the truth-related information is almost totally filtered out by the confirmation bias, and the agents quickly evolve to fixed, delta-like belief distributions. For intermediate values of f, the rightists and the centrists show no effects, but the leftists are gradually 'convinced' to shift their opinions somewhat towards positive values. For small values of the filtering effectiveness, the opinions of the three groups begin to converge, but getting close to consensus requires very small values of f (on the order of 0.02 or less).
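Before moving to the filtered cases with imperfect memory, it may help to state the reset step in the same sketch form. The model requires only that the reset broadens the distribution while leaving the current individual average belief unchanged; mixing with a broad Gaussian centred at that average (as below, reusing the grid and helpers from the sketch above) is one implementation consistent with this requirement, and the width sigma_0 of the broad component is an illustrative choice:

    def reset_indeterminacy(X_j, m, sigma0=0.2):
        # Mix the current belief with a broad component centred at the
        # current mean: X -> m * X + (1 - m) * G(theta; <theta>_j, sigma0).
        # The mean is preserved exactly; only the spread grows.
        mean_j = (theta * X_j).sum() * dtheta
        broad = gaussian(mean_j, sigma0)
        X_new = m * X_j + (1.0 - m) * broad
        return X_new / (X_new.sum() * dtheta)

For m = 1 the belief is untouched; for m < 1 each reset re-injects the 'broadminded' component that the repeated Bayesian updates keep squeezing out.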
Figure 15. No filtering applied, overview of memory effects. Time evolution of the leftist group ensemble average ⟨θ⟩_L(t) for a fixed p and various values of the memory factor m. Even a relatively small memory loss (m just below 1) speeds up the transition of ⟨θ⟩_L to the true value θ_T. In other words, agents whose belief distributions are (however seldom) reset to more 'broadminded' forms are more likely to learn from the 'truth-related' information source.

C. Case 2a: Individual confirmation bias filter of true information, reset of beliefs due to imperfect memory (m < 1)

The confirmation bias filter very quickly leads to extreme narrowing of the individual belief distributions (for the fully effective case f = 1 this happens after a few tens of interactions). This suggests that the inclusion of broadening mechanisms might have a more significant effect than in the case of unfiltered information processing. Indeed, lowering m below 1 changes the evolution of the individual beliefs dramatically, as we can see from Figure 20 (which corresponds to the 'perfect memory', m = 1 case shown in Figure 16). When the beliefs are affected by the indeterminacy reset (which, we remind, does not change the current individual average belief), they are much more modifiable by the information source.

In the case of the S_T(θ) information source, the effects of the memory imperfection are seen most clearly in the behaviour of the leftist group, because this group is the furthest from the 'true' value θ_T. The change introduced by the indeterminacy reset is best visualized in the time evolution of the group ensemble average beliefs ⟨θ⟩_L (Figure 21). The presence of the indeterminacy reset due to imperfect memory causes the individual opinion distributions to retain some component of broader beliefs and facilitates their shift under the influence of the information source. As the process of reaching the consensus looks similar for all groups, this would lead to a global consensus centred at the θ_T value. The process ⟨θ⟩_L(t) → θ_T is quite fast, on the order of a few hundred time steps for small m, but it slows down for higher values of m. Above a certain value of m close to 1 (i.e. for an almost perfect memory), the narrowing of the individual opinion distributions dominates and the group averages remain close to their initial values. In plain words, when the agents are allowed to become extremely close-minded due to the confirmation bias, the truth-related information has limited effect, and the initially assumed polarization between the leftists, centrists and rightists remains unchanged.

The transition between the polarized state, at large enough m, and the consensus, for smaller m values, is rather abrupt. Figure 22 presents the dependence of the ⟨θ⟩_L(t) values on m, for two values of the filter effectiveness, f = 1 and 0.5, and for three time snapshots, t = 1000, 10000 and 50000. Increasing the time leads to a step-like transition between conditions preserving the polarization and those leading to the consensus.

We recall here the brief discussion of mapping the simulation time to real world units. Obviously, if we count as events only the cases when a person encounters really new, significant information (e.g. listens to a candidate's speech at a rally or a debate, or reads an important article in the press), then 50000 events is clearly unrealistic. Even a few hundred events (necessary to reach consensus for very imperfect memory) may be questioned. On the other hand, we may treat the periods 'between the events', essentially the very times in which the memory imperfection and the uncertainty reset would be expected to occur, as single entities or, perhaps, as a multitude of them; the mapping then allows much larger event counts. A partial answer could be provided by psychological research devoted to the existence of the indeterminacy resets of opinions and the conditions associated with them.

D. Case 3: Politically Motivated Reasoning filter
In contrast with the confirmation bias, the PMR filter is assumed to depend on the current beliefs of the in-group, treated as a whole. In the simplest version, we assume that every agent knows perfectly the ensemble averaged belief distribution of its in-group, X̄_G(θ, t), and uses it as the filter for information processing. The filter is dynamical: as the individual agents change their beliefs, so does the group average. As in the previous sections, we focus on the truth-related information source S_T(θ) and keep the information processing probability p fixed. Our focus is, therefore, the role of the filter effectiveness f in the evolution of the group belief distributions. The current section considers the case of agents with perfect memory (m = 1).

We shall start with Figure 23, which corresponds directly to the results for the confirmation bias filter (Figure 19). For very small values of f the averaged beliefs converge on the true value, as the information source 'gets through' thanks to the uniform part of the filter. On the other hand, for f ≈ 1, the PMR filtering mechanism effectively freezes the group opinions. For the two groups which are initially closer to the true opinion θ_T, namely the rightists ⟨θ⟩_R and the centrists ⟨θ⟩_C, the frozen value remains unchanged as we lower f, and only for very small values of f does it change gradually, resembling the behaviour for the confirmation bias filter. For the leftists, however, instead of the continuous change observed in the confirmation bias case, we observe a discontinuous transition at a certain value f_crit (equal to 0.43 for the current set of agents; see Figures 24 and 25).
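Before examining this discontinuity in detail, the PMR update can be stated in the same sketch form as the confirmation bias case, the only change being that the in-group ensemble average replaces the individual prior inside the filter and is recomputed at every step. Again a sketch reusing the grid and the helpers defined earlier; the values of p and of the group parameters are illustrative:

    def pmr_step(beliefs, f, p, rng):
        # One synchronous PMR-filtered update for a list of agent beliefs.
        # The filter uses the ensemble averaged in-group distribution X_G,
        # recomputed every step, so the filter itself is dynamical.
        X_G = np.mean(beliefs, axis=0)
        X_G = X_G / (X_G.sum() * dtheta)
        new_beliefs = []
        for X_j in beliefs:
            if rng.random() < p:                 # the agent processes the information
                F_L = (f * X_G + (1.0 - f) * U) * S_T
                X_j = X_j * F_L
                X_j = X_j / (X_j.sum() * dtheta)
            new_beliefs.append(X_j)
        return new_beliefs

    rng = np.random.default_rng(0)
    group = [gaussian(mu, 0.2) for mu in rng.normal(-0.5, 0.05, 100)]
    for _ in range(200):
        group = pmr_step(group, f=0.42, p=0.5, rng=rng)

The key difference from the confirmation bias filter is the feedback loop: every agent that shifts moves X̄_G, and the shifted X̄_G in turn filters the next round of information processing for the whole group.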
Figure 16. Confirmation bias filtering, perfect memory. Time evolution of the individual belief distributions of selected leftist agents using confirmation bias filtering of the truth-related information. The distributions are shown for a subset of randomly selected leftist agents. As time progresses (clockwise), the individual belief distributions shift to higher θ values and become increasingly narrow. The first process reflects the influence of the new information favouring θ_T; the second is the result of the confirmation bias. Quite quickly (much faster than in the case of unfiltered information processing), all agents evolve to delta-like belief distributions (the t = 80 panel). For f values greater than 0.05, the narrowing of the individual beliefs dominates over their shift towards the true value for a large number of the agents.

Figure 17. Confirmation bias filtering, perfect memory. Time evolution of the average beliefs ⟨θ⟩_j (thin lines) and the group averages ⟨θ⟩ (thick lines) for the three groups of agents using the confirmation bias, for f = 1 and two smaller values of the filter effectiveness. Decreasing the effectiveness of the confirmation bias filter delays the time at which the individual opinion distributions become fixed and delta-like, visible in the figure as horizontal lines. In some cases we observe jumps in the opinion, typical for discrete Bayesian updates.

To understand this discontinuity, we have to look into the details of the evolution of the individual belief distributions. Figures 24 and 25 show examples of time snapshots of the individual belief distributions X_j(θ, t), collected for f just above the transition value (f = 0.43) and just below it (f = 0.42). The starting point is the same in the two cases. The initial evolution is driven by the interplay of the asymmetry of the information source (favouring positive values of θ) and the PMR filter. It leads to the formation of two attractors around which the individual agents group: one close to the upper end of the original leftist domain and the second, corresponding to partially 'convinced' agents, located at positive θ. Decreasing the filter effectiveness f increases the number of agents in the latter group. Because the ensemble averaged belief distribution enters the process in the next iteration, for f ≤ 0.42 a positive feedback mechanism leads to the eventual dominance of the convinced group. On the other hand, for f ≥ 0.43 the size of the convinced group is too small to persist, and eventually all agents retain or revert to their leftist stance.

The results for the Politically Motivated Reasoning filter were obtained under the assumption that the composition of the group to which an agent looks for belief guidance remains unchanged. The simulations assume that each agent considers the whole group, defined in the initial input files, when calculating the ensemble averaged belief distribution X̄_G(θ, t) used as the filter. This leads to the situation in which the more flexible agents, who have shifted their opinions, can eventually pull the whole group with them (for small enough f values).

Such an assumption might be criticized from a sociological point of view. In a situation such as that depicted in the t = 50 panels of Figures 24 and 25, where the belief distributions of the flexible and the inflexible agents have very little overlap, one could expect each of the subgroups to restrict its PMR filter to the group of the currently like-minded agents. In other words, the flexibles, who have moved away from the initial group average, would be rejected by the less flexible agents as traitors of the cause, and disregarded when calculating the PMR filter. The obvious result would be a split of the initial group, occurring within just a few filtered iterations (somewhere between t = 25 and t = 50). In such an approach it would be useful to change the simulation measurements from the group averages of belief ⟨θ⟩_G to the numbers of the inflexibles, unconvinced by the information, and of the agents who have shifted their beliefs. A sketch of such a like-minded restriction is given below; the full dynamical group composition variant shall be the topic of later works.
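One possible formalization of such a like-minded restriction, in the same notation as before, replaces the fixed group by the set of agents whose distributions overlap sufficiently with the focal agent's. The overlap measure and the threshold below are our illustrative choices, not part of the simulated model:

    def overlap(X_a, X_b):
        # Bhattacharyya-style overlap between two belief distributions (1 = identical).
        return np.sqrt(X_a * X_b).sum() * dtheta

    def like_minded_filter(beliefs, j, threshold=0.5):
        # Group-average filter for agent j, restricted to the currently
        # like-minded agents (overlap with agent j above the threshold).
        members = [X for X in beliefs if overlap(beliefs[j], X) > threshold]
        X_G = np.mean(members, axis=0)
        return X_G / (X_G.sum() * dtheta)

With such a rule, the 'convinced' and 'unconvinced' subgroups stop influencing each other as soon as their distributions separate, and the natural observable becomes the size of each subgroup rather than the single group average ⟨θ⟩_G.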
Figure 18. Confirmation bias filtering, perfect memory. Averaged distributions of agent beliefs in the three groups. Thick, smooth lines: initial distributions; thin lines: distributions after 10000 steps. Results are shown for f = 1 and two smaller values of f.

Figure 19. Confirmation bias filtering, perfect memory. Dependence of the final value of ⟨θ⟩_G for the three groups on the filtering effectiveness f for the confirmation bias filter. Note that the convergence of opinions near the true value requires very weak filtering (very small f).

Figure 20. Confirmation bias filtering, reset of belief indeterminacy. Snapshots of the evolution of the individual belief distributions of selected leftist agents using confirmation bias filtering of the truth-related information, with an imperfect memory factor m < 1. The distributions are shown for a subset of randomly selected leftist agents. As time progresses (clockwise), the individual belief distributions shift to higher θ values but remain rather broad (as are the original distributions). A much greater number of agents move to the true belief θ_T; eventually, all agents would reach consensus at this value.

E. Case 3a: PMR filter with imperfect memory (m < 1)

The discontinuous change in the system behaviour described in the previous section results from the extreme narrowing of the individual belief distributions due to the repeated application of the filter. Guided by the analyses of the confirmation bias filter with imperfect memory, we expect that the reset of the individual belief indeterminacy should significantly change the system behaviour. Figure 26, which presents the results for an imperfect memory, confirms these expectations. Instead of the discrete jump seen for the leftist group in the unmodified m = 1 case (Figure 23), we observe smooth changes of all group averages of beliefs ⟨θ⟩_G. Moreover, a full consensus is reached for finite (although small) values of f. An additional difference of the imperfect-memory PMR simulations from all cases considered so far is that individual simulation runs converge to somewhat different configurations. We have indicated this as error bars in Figure 26.

The roughly linear dependence of ⟨θ⟩_L on f at larger f results from the increased individual opinion flexibility introduced by the admixture of the broad-minded component of the individual beliefs treated as priors. To understand this better, we have studied the dependence of ⟨θ⟩_L on the memory factor m for fixed values of f. The results are shown in Figure 27. In the case of a relatively effective PMR filter (the two values of f shown in Figure 27), there are two distinct regimes of system behaviour. Above a certain threshold value m_T(f), there is only a weak, linear dependence of ⟨θ⟩_L on m, mostly due to individual belief shifts during the few initial time steps, which quickly become frozen. On the other hand, for m smaller than m_T(f), all agents shift their opinions in accordance with the information source, moving eventually to centrist and rightist positions. The value of m_T(f) is only approximate, as a consequence of the differences between individual simulation runs due to the finite size of the system.
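The threshold m_T(f) can be located numerically by scanning m at a fixed f and recording the late-time group average. A sketch of such a harness, composing the pmr_step and reset_indeterminacy helpers from the earlier sketches; the run length, the grid of m values, and the choice of applying the reset after every update are illustrative scheduling decisions, not the paper's exact protocol:

    def run_average(f, m, steps=2000, n_agents=100, seed=0):
        # Late-time leftist group average <theta>_L for a given (f, m) pair.
        rng = np.random.default_rng(seed)
        group = [gaussian(mu, 0.2) for mu in rng.normal(-0.5, 0.05, n_agents)]
        for _ in range(steps):
            group = pmr_step(group, f, 0.5, rng)
            group = [reset_indeterminacy(X, m) for X in group]
        means = [(theta * X).sum() * dtheta for X in group]
        return float(np.mean(means))

    # crude scan for the threshold m_T at fixed f:
    for m in np.linspace(0.5, 1.0, 11):
        print(f"m = {m:.2f}  <theta>_L = {run_average(f=1.0, m=m):.3f}")

A step-like change of the printed average as m crosses m_T(f) reproduces, qualitatively, the two regimes described above.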
Figure 21. Confirmation bias filtering, memory effects. Time dependence of the value of ⟨θ⟩_L for the leftist group for various values of the memory parameter m, for f equal to 1. Reducing the value of m changes the evolution of the individual beliefs and, in consequence, the group average ⟨θ⟩_L(t): for m smaller than a certain value (significant broadening), all agents become 'convinced' by the information source and accept θ_T as the centre of their belief distributions. The conviction process is the fastest for the lowest values of m. On the other hand, for m close to 1 the agents' belief distributions remain frozen, which means that the whole system would exhibit significant polarization.

Figure 22. Confirmation bias filtering, memory effects. Dependence of the leftist group ensemble average ⟨θ⟩_L(t) as a function of the memory parameter m, for two values of the filtering effectiveness, f = 1 and f = 0.5, and for three time values, t = 1000, 10000 and 50000 steps. For small m values, the group average converges on the true value θ_T. For large m values (better memory, i.e. a lesser role of the indeterminacy reset) the beliefs remain on the left side of the opinion spectrum. Increasing the time t at which we measure ⟨θ⟩_L(t) makes the transition between the two regimes (preserving the original opinions and accepting the true value) less gradual as a function of the memory fidelity parameter m.

Figure 23. Politically Motivated Reasoning filtering, perfect memory. Dependence of the final value of ⟨θ⟩_G for the three groups on the filtering effectiveness f for the PMR filter. For f not too small, the averages are almost independent of f. At f ≈ 0.43 (marked by the red ellipse), ⟨θ⟩_L shows a large jump towards the θ_T value; the jump effectively turns the leftists into centrists. For very small values of the filtering effectiveness, the opinions of all three groups converge on the true value θ_T.

V. DISCUSSION

A. Time dependency considerations

The choice of the right simulation-to-reality time scaling may depend on the way we define the information processing events. On one hand, we could consider only the major news and real world occurrences, such as the crucial election stories and events. In such a case, the number of the opinion shaping encounters could be treated as relatively small, certainly not in the range of tens of thousands, or thousands per month. In such a view, the time periods between the information processing events are long enough to allow the uncertainty reset. At the other end of the spectrum is the vision in which our beliefs are shaped by a continuous stream of events, differing in their source type, intensity, repetition and many other characteristics. Some of these would originate from external sources, characterized by relatively stable views and opinions (biased or unbiased at the source), while other events could originate from more or less random encounters with other people or from observations of ostensibly small importance. In such a microscopic approach, the number of the events could be very large.

The focus of this work was on the long term effects of a single type of an information source, interspersed with the periods when the individual belief structure may become less certain.
Figure 24. Politically Motivated Reasoning filtering, perfect memory. Time snapshots of the individual belief distributions for the PMR filter for f = 0.43. The individual agents' belief distributions at t ≈ 50 are divided into the 'inflexibles', with opinions centred in the leftist domain, and the agents who were influenced by the news source, with their distributions centred at positive θ. The thick dark line shows the average distribution of beliefs, which serves as the filter for the next time step. For f greater than 0.43 the number of the influenced agents is too small, and repeated interactions diminish the influence of the positive-θ filter peak. At t = 100 all agents revert to the leftist positions.

The goal was to construct a Bayes-based, filtered information processing ABM and to see whether such an approach can yield 'reasonable results', by which we mean, depending on the conditions, either a general consensus or a persistent disagreement and polarization. The results have shown that the model can, indeed, produce these outcomes under simple manipulation of a few key parameters.
Figure 25. Politically Motivated Reasoning filtering, perfect memory. Time snapshots of the individual belief distributions for the PMR filter for f = 0.42. As before, the individual agents' belief distributions at t ≈ 50 are divided into the 'inflexibles', with opinions centred in the leftist domain, and the agents who were influenced by the news source, with their distributions centred at positive θ. The thick dark line shows the average distribution of beliefs, which serves as the filter for the next time step. For f smaller than or equal to 0.42, the number of the influenced agents becomes large enough to eventually dominate, and the repeated interactions move all agents to the centrist position. The jump observed in Figure 23 occurs when the number of the influenced agents passes the necessary threshold. Due to the positive feedback, once the positive-θ peak dominates the filtering, the repeated filtered information processing further increases its size in the subsequent interactions.
Figure 26. Politically Motivated Reasoning filtering, reset of belief indeterminacy. Dependence of the final value of ⟨θ⟩_G for the three groups on the filtering effectiveness f for the PMR filter with imperfect memory (m < 1). The broadening of the individual belief distributions due to the imperfect memory restores an almost linear dependence of the ensemble average opinion for the leftist group. The resulting leftist group average ⟨θ⟩_L shows, for larger f, sizeable differences between the individual simulation runs, which are indicated by error bars.

The question of the 'right' timescale for opinion change cannot be resolved by such a qualitative, simplified model. Among the unknowns are the effectiveness of the Bayesian update process and of the filtering, the scale of the memory-imperfection related uncertainty reset, and the elements omitted in the current model, for example the differences in the intensity of particular events. A more realistic model should be based on psychological studies, which would, hopefully, also provide suggestions as to whether we should focus on the effects of a few (a few tens? hundreds?) information processing events, or rather look at the stable or quasi-stable states reached after thousands of microscopic events.
B. Manipulation of the Politically Motivated Reasoning Filter
The current political developments in many democratic societies show dramatically increasing levels of polarization, covering the general public and the media (PEW [1], Baldassarri and Bearman [5], Bernhardt et al. [11], Fiorina and Abrams [31], Prior [69], Stroud [86], Tewksbury and Riles [94]). In many countries the chances of a rational discussion between the conflicted groups (not to mention working out a sensible compromise) seem almost nonexistent. Recent US presidential elections provide an obvious example, but the seemingly irrevocable split exists in many other domains, sometimes with division lines not parallel to the political ones. A good example of such a split is the existence and (in many countries) growth of the anti-vaccination movements (Betsch [12], Betsch and Sachse [13], Blume [14], Davies et al. [26], Hough-Telford et al. [44], Kata [53], Leask et al. [55], McKeever et al. [57], Nelson [58], Ołpiński [64], Streefland [85], Wolfe and Sharp [104]), which are not strictly 'politically' aligned. The efforts to convince the vaccination opponents are quite unsuccessful, regardless of the approach used. Similar problems occur in more politicized issues. This applies both to the cases where suitable evidence is available, for example in the controversies over gun control policies, climate change, GMO or nuclear energy, and to the cases where the beliefs and opinions are largely subjective, such as evaluations of specific politicians (e.g. Hillary Clinton or Donald Trump).

Figure 27. Politically Motivated Reasoning filtering, memory effects. Dependence of the final value of ⟨θ⟩_L for the leftist group on the memory factor m, for two values of the filtering effectiveness f for the PMR filter. Dots show the results of the individual simulation runs. Red ellipses indicate the regions close to the threshold value of m at which the behaviour of the system changes. Decreasing the memory quality from the perfect case (m = 1) leads initially to a very slight, linear shift in the ⟨θ⟩_L value, attributable to belief changes in the first few interactions. Below the threshold value (which depends on f), the group opinion average grows to approach the true value θ_T. The black lines are separate best fits: a linear function for m greater than the threshold value and a quadratic function for m smaller than it.

The difficulty in minimizing the polarization may be partially attributed to the cognitive biases and the motivated information processing described in this paper. Filtering out of information may be very effective in keeping a person's beliefs unchanged. In fact, some cognitive heuristics may have evolved to provide this stability (e.g. the confirmation bias). This makes the task of bridging the gaps between the polarized sections of our societies seem impossible. Still, as Kahan has noted, some filtering mechanisms may be more flexible than others.

A good example is provided by a comparison of the confirmation bias and the PMR. Kahan [50] notes that in some cases PMR may be confused with the confirmation bias:

Someone who engages in politically motivated reasoning will predictably form beliefs consistent with the position that fits her predispositions. Because she will also selectively credit new information based on its congeniality to that same position, it will look like she is deriving the likelihood ratio from her priors. However, the correlation is spurious: a 'third variable' – her motivation to form beliefs congenial to her identity – is the "cause" of both her priors and her likelihood ratio assessment.
Kahan notes the importance of the difference: if the source of the filter is 'internal' (confirmation bias), we have little hope of modifying it. On the other hand, if the motivation for filtering is related to the perception of in-group norms, the opinions may be changed if the perception of these in-group norms changes. Re-framing the issues in a language that conforms to specific in-group identifying characteristics, or providing information that certain beliefs are 'in agreement' with the value system of the in-group and/or the majority of its members, would change the PMR filtering mechanism. Through this change, more information could be allowed through, changing the Bayesian likelihood function and, eventually, the posterior beliefs.
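In the language of the sketches above, such an intervention changes the distribution entering the PMR filter rather than the agents' priors. A minimal illustration; the shifted 'perceived norm' is a hypothetical construct used here for demonstration, not an element of the simulated model:

    def manipulated_pmr_filter(X_G, f, perceived_norm=None):
        # PMR likelihood in which the in-group average X_G may be replaced
        # by a re-framed 'perceived norm' distribution (e.g. one moved
        # towards theta_T), leaving the agents' priors untouched.
        norm = X_G if perceived_norm is None else perceived_norm
        return (f * norm + (1.0 - f) * U) * S_T

    # presenting the in-group consensus as slightly more truth-leaning:
    # F_L = manipulated_pmr_filter(X_G, f=0.9, perceived_norm=gaussian(-0.2, 0.3))

Because it is the filter, not the prior, that blocks the information, even a modest shift of the perceived norm lets more of S_T(θ) through at every subsequent update.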
C. Model extensions and further research directions
The simulations presented in the current work are based on drastically simplified assumptions: only a single source of information, with a consistently repeated S(θ) distribution; only one type of filter; and a focus on long term, stable conditions. These simplifications directly indicate the directions of further work: dealing with conflicting information sources, combinations of different types of filters, and transient phenomena describing the immediate reactions to the exposure to news. Another planned model extension is related to modelling the possible dynamical nature of the group-norms based PMR filter, mentioned in Section IV D. When opinions within a group initially treated as homogeneous begin to diverge, it is quite likely that the very definition of the group would change. The agents could redefine the criteria of who counts as a member of the in-group, treating those with sufficiently different belief distributions as outsiders (possibly with the negative emotional label of traitors). Such a move would dynamically redefine the perceived in-group standards and norms. The resulting change in the PMR filter could shift the model dynamics from opinion changes to changes in group sizes and identification.

The model proposed in this work may be characterised as a 'rich feature agent' model, in contrast to the simplified 'spinson' models. To examine the possibilities of the approach, we have focused on a system in which agents repeatedly react to an unchanging, single external information source. This has allowed us to discover some regularities and to understand the roles of the model parameters.

The same general framework of biased processing of information may be used in more complex environments. It can cover agents interacting among themselves in arbitrarily chosen social networks. In such a scenario, the input information would be generated by one of the agents (a sender) and would be received and evaluated, using the filtering mechanisms and biases, by another agent or agents (the recipients). Each recipient would then update its opinion (as described by the belief distribution) and, if applicable for the bias type, also the filter function. Of course, it is possible to reverse the roles of the agents and to allow bidirectional communication. Because the filters used by the communicating agents may be different, the interaction process may be asymmetric. It is also possible to combine the agent-to-agent interactions with the influences of external information sources, and thus to create a truly complex model approximating a real society.

Lastly, especially in the case of the studies of short term, transient changes, the possibilities of manipulation of the filters by outside agencies offer a very interesting and important future research direction. Such investigations should cover both the manipulations increasing polarization (partisan information sources and the reliance on the emotional context of the information) as well as the efforts in the opposite direction: to detect and to combat the manipulative influences. The latter are especially important to enhance the chances of a meaningful dialogue in our already highly polarized societies.

[1] Political polarization in the American public. Technical report, Pew Research Center, 2014.
[2] L.A. Adamic and N. Glance. The political blogosphere and the 2004 US election: divided they blog. In Proceedings of the 3rd International Workshop on Link Discovery, pages 36–43, 2005.
[3] S. Albanie, H. Shakespeare, and T. Gunter. Unknowable manipulators: Social network curator algorithms. arXiv preprint arXiv:1701.04895, 2017.
[4] C.T. Allen, K.A. Machleit, S.S. Kleine, and A.S. Notani. A place for emotion in attitude models. Journal of Business Research, 58(4):494–499, 2005.
[5] D. Baldassarri and P. Bearman. Dynamics of political polarization. American Sociological Review, 72(5):784, 2007.
[6] S.G. Barsade. The ripple effect: Emotional contagion and its influence on group behavior. Administrative Science Quarterly, 47(4):644–675, 2002.
[7] E. Ben-Naim, L. Frachebourg, and P.L. Krapivsky. Coarsening and persistence in the voter model. Physical Review E, 53(4):3078–3087, 1996.
[8] B. Benson. Cognitive bias cheat sheet, 2016. URL https://betterhumans.coach.me/cognitive-bias-cheat-sheet-55a472476b18.
[9] J. Berger and K.L. Milkman. Social transmission, emotion, and the virality of online content. Technical report, Wharton School, University of Pennsylvania, 2010.
[10] A.T. Bernardes, U.M.S. Costa, A.D. Araujo, and D. Stauffer. Damage spreading, coarsening dynamics and distribution of political votes in Sznajd model on square lattice. International Journal of Modern Physics C, 12(2):159–168, 2001.
[11] D. Bernhardt, S. Krasa, and M. Polborn. Political polarization and the electoral effects of media bias. Journal of Public Economics, 92(5-6):1092–1104, 2008.
[12] C. Betsch. Innovations in communication: the Internet and the psychology of vaccination decisions. Euro Surveill, 16:17, 2011.
[13] C. Betsch and K. Sachse. Debunking vaccination myths: Strong risk negations can increase perceived vaccination risks. Health Psychology, 32(2):146, 2013.
[14] S. Blume. Anti-vaccination movements and their interpretations. Social Science & Medicine, 62(3):628–642, 2006.
[15] T. Bosse, M. Hoogendoorn, M.C.A. Klein, J. Treur, C.N. Van Der Wal, and A. Van Wissen. Modelling collective decision making in groups and crowds: Integrating social contagion and interacting emotions, beliefs and intentions. Autonomous Agents and Multi-Agent Systems, 27(1):52–84, 2013.
[16] J.G. Bullock. Partisan bias and the Bayesian ideal in the study of public opinion. The Journal of Politics, 71(03):1109–1124, 2009.
[17] F.R. Campante and D.A. Hojman. Media and polarization. Technical report, Harvard University, John F. Kennedy School of Government, 2010.
[18] F. Caruso and P. Castorina. Opinion dynamics and decision of vote in bipolar political systems. International Journal of Modern Physics C, 16(09):1473–1487, 2005.
[19] C. Castellano. Social influence and the dynamics of opinions: The approach of statistical physics. Managerial and Decision Economics, 2012.
[20] C. Castellano, D. Vilone, and A. Vespignani. Incomplete ordering of the voter model on small-world networks. EPL (Europhysics Letters), 63:153, 2003.
[21] C. Castellano, S. Fortunato, and V. Loreto. Statistical physics of social dynamics. Rev. Mod. Phys., 81:591–646, 2009.
[22] A. Chmiel, J. Sienkiewicz, M. Thelwall, G. Paltoglou, K. Buckley, A. Kappas, and J.A. Hołyst. Collective emotions online and their influence on community life. PLOS One, 6(7):e22207, 2011.
[23] A. Chmiel, P. Sobkowicz, J. Sienkiewicz, G. Paltoglou, K. Buckley, M. Thelwall, and J.A. Hołyst. Negative emotions boost users activity at BBC forum. Physica A, 390(16):2936–2944, 2011.
[24] G.L. Clore and J.R. Huntsinger. How emotions inform judgment and regulate thought. Trends in Cognitive Sciences, 11(9):393, 2007.
[25] J.T. Cox and D. Griffeath. Diffusive clustering in the two dimensional voter model. The Annals of Probability, 14(2):347–370, 1986.
[26] P. Davies, S. Chapman, and J. Leask. Antivaccination activists on the World Wide Web. Archives of Disease in Childhood, 87(1):22–25, 2002.
[27] G. Deffuant, D. Neau, F. Amblard, and G. Weisbuch. Mixing beliefs among interacting agents. Advances in Complex Systems, 3:87–98, 2000.
[28] G. Deffuant, F. Amblard, G. Weisbuch, and T. Faure. How can extremism prevail? A study based on the relative agreement interaction model. Journal of Artificial Societies and Social Simulation, 5(4), 2002.
[29] S. Djamasbi, M. Siegel, J. Skorinko, and T. Tullis. Online viewing and aesthetic preferences of generation Y and the baby boom generation: Testing user web site experience through eye tracking. International Journal of Electronic Commerce, 15(4):121–158, 2011.
[30] S. Djamasbi, J. Rochford, A. DaBoll-Lavoie, T. Greff, J. Lally, and K. McAvoy. Text simplification and user experience. In International Conference on Augmented Cognition, pages 285–295. Springer, 2016.
[31] M.P. Fiorina and S.J. Abrams. Political polarization in the American public. Annu. Rev. Polit. Sci., 11:563–588, 2008.
[32] A. Fonseca and J. Louca. Political opinion dynamics in social networks: the Portuguese 2010-11 case study, 2015.
[33] S. Fortunato and C. Castellano. Scaling and universality in proportional elections. Physical Review Letters, 99(13):138701, 2007.
[34] S. Galam. Sociophysics: A Physicist's Modeling of Psycho-political Phenomena. Springer, 2012.
[35] S. Galam, B. Chopard, and M. Droz. Killer geometries in competing species dynamics. Physica A: Statistical Mechanics and its Applications, 314(1):256–263, 2002.
[36] S. Galam. The Trump phenomenon, an explanation from sociophysics. arXiv preprint arXiv:1609.03933, 2016.
[37] J. Haidt. The emotional dog and its rational tail: a social intuitionist approach to moral judgment. Psychological Review, 108(4):814–834, 2001.
[38] J. Haidt. Left and right, right and wrong. Science, 337:525–526, 2012.
[39] J. Haidt. The new synthesis in moral psychology. Science, 316(5827):998–1002, 2007.
[40] J. Haidt. The Righteous Mind: Why Good People Are Divided by Politics and Religion. Vintage, 2012.
[41] E. Hatfield, J.T. Cacioppo, and R.L. Rapson. Emotional contagion. Current Directions in Psychological Science, 2(3):96–99, 1993.
[42] R. Hegselmann and U. Krause. Opinion dynamics and bounded confidence models, analysis, and simulation. Journal of Artificial Societies and Social Simulation (JASSS), 5(3), 2002.
[43] J.A. Hołyst, K. Kacperski, and F. Schweitzer. Social impact models of opinion dynamics. Annual Review of Comput. Phys., IX:253–273, 2001.
[44] C. Hough-Telford, D.W. Kimberlin, I. Aban, W.P. Hitchcock, J. Almquist, R. Kratz, and K.G. O'Connor. Vaccine delays, refusals, and patient dismissals: A survey of pediatricians. Pediatrics, 2016.
[45] J. Jerit and J. Barabas. Partisan perceptual bias and the information environment. Journal of Politics, 74(3):672–684, 2012.
[46] J.T. Jost, E.P. Hennes, and H. Lavine. Hot political cognition: Its self-, group-, and system-serving purposes. Oxford Handbook of Social Cognition, pages 851–875, 2013.
[47] J.T. Jost, J. Glaser, A.W. Kruglanski, and F.J. Sulloway. Political conservatism as motivated social cognition. Psychological Bulletin, 129(3):339–375, 2003.
[48] K. Kacperski and J.A. Hołyst. Opinion formation model with strong leader and external impact: a mean field approach. Physica A, 269:511–526, 1999.
[49] K. Kacperski and J.A. Hołyst. Phase transitions as a persistent feature of groups with leaders in models of opinion formation. Physica A, 287:631–643, 2000.
[50] D.M. Kahan. The politically motivated reasoning paradigm, part 1: What politically motivated reasoning is and how to measure it. In R. Scott and S. Kosslyn, editors, Emerging Trends in the Social and Behavioral Sciences. Wiley Online Library, 2016.
[51] D.M. Kahan. The politically motivated reasoning paradigm, part 2: Unanswered questions. In R. Scott and S. Kosslyn, editors, Emerging Trends in the Social and Behavioral Sciences. Wiley Online Library, 2016.
[52] D. Kahneman. Thinking, Fast and Slow. Macmillan, 2011.
[53] A. Kata. A postmodern Pandora's box: anti-vaccination misinformation on the Internet. Vaccine, 28(7):1709–1716, 2010.
[54] E. Lawrence, J. Sides, and H. Farrell. Self-segregation or deliberation? Blog readership, participation, and polarization in American politics. Perspectives on Politics, 8(01):141–157, 2010.
[55] J. Leask, S. Chapman, P. Hawe, and M. Burgess. What maintains parental support for vaccination when challenged by anti-vaccination messages? A qualitative study. Vaccine, 24(49):7238–7245, 2006.
[56] A.C.R. Martins. Bayesian updating rules in continuous opinion dynamics models. Journal of Statistical Mechanics: Theory and Experiment, 2009(02):P02017, 2009.
[57] B.W. McKeever, R. McKeever, A.E. Holton, and J.-Y. Li. Silent majority: Childhood vaccinations and antecedents to communicative action. Mass Communication and Society, 19:476–498, 2016.
[58] K. Nelson. Markers of trust: How pro- and anti-vaccination web sites make their case. Technical Report 1579525, SSRN, 2010.
[59] V. Ngampruetikorn and G.J. Stephens. Bias, belief, and consensus: Collective opinion formation on fluctuating networks. Physical Review E, 94(5):052312, 2016.
[60] R. Nielek, A. Wawer, and A. Wierzbicki. Spiral of hatred: social effects in Internet auctions. Between informativity and emotion. Electronic Commerce Research, 10:313–330, 2010.
[61] A. Nowak and M. Lewenstein. Modeling social change with cellular automata. In R. Hegselmann, U. Mueller, and K.G. Troitzsch, editors, Modelling and Simulation in the Social Sciences From A Philosophy of Science Point of View, pages 249–285. Kluwer, Dordrecht, 1996.
[62] A. Nowak, J. Szamrej, and B. Latané. From private attitude to public opinion: A dynamic theory of social impact. Psychological Review, 97(3):362–376, 1990.
[63] P. Nyczka and K. Sznajd-Weron. Anticonformity or independence? Insights from statistical physics. Journal of Statistical Physics, 151:174–202, 2013.
[64] M. Ołpiński. Anti-vaccination movement and parental refusals of immunization of children in USA. Pediatria Polska, 87(4):381–385, 2012.
[65] J.J. Opaluch and K. Segerson. Rational roots of 'irrational' behavior: new theories of economic decision-making. Northeastern Journal of Agricultural and Resource Economics, 18(2):81–95, 1989.
[66] F. Palombi and S. Toti. Voting behavior in proportional elections from agent-based models. Physics Procedia, 62:42–47, 2015.
[67] E. Pariser. The Filter Bubble: What the Internet Is Hiding from You. Penguin UK, 2011.
[68] C.N. Parkinson. Parkinson's Law: The Pursuit of Progress. John Murray, 1958.
[69] M. Prior. Media and political polarization. Annual Review of Political Science, 16:101–127, 2013.
[70] X. Qiu, D.F.M. Oliveira, A.S. Shirazi, A. Flammini, and F. Menczer. Lack of quality discrimination in online information markets. arXiv preprint arXiv:1701.02694, 2017.
[71] M. Reifen Tagar, C.M. Federico, and E. Halperin. The positive effect of negative emotions in protracted conflict: The case of anger. Journal of Experimental Social Psychology, 47(1):157–164, 2011.
[72] L. Sabatelli and P. Richmond. Phase transitions, memory and frustration in a Sznajd-like model with synchronous updating. International Journal of Modern Physics C, 14:1223–1229, 2003.
[73] L. Sabatelli and P. Richmond. Non-monotonic spontaneous magnetization in a Sznajd-like consensus model. Physica A: Statistical Mechanics and its Applications, 334(1):274–280, 2004.
[74] F. Slanina and H. Lavicka. Analytical results for the Sznajd model of opinion formation. European Physical Journal B - Condensed Matter, 35(2):279–288, 2003.
[75] P. Sobkowicz. Modelling opinion formation with physics tools: call for closer link with reality. Journal of Artificial Societies and Social Simulation, 12(1):11, 2009.
[76] P. Sobkowicz. Effect of leader's strategy on opinion formation in networked societies with local interactions. International Journal of Modern Physics C (IJMPC), 21(6):839–852, 2010.
[77] P. Sobkowicz. Discrete model of opinion changes using knowledge and emotions as control variables. PLOS One, 7(9):e44489, 2012.
[78] P. Sobkowicz. Minority persistence in agent based model using information and emotional arousal as control variables. The European Physical Journal B, 86(7):1–11, 2013.
[79] P. Sobkowicz. Quantitative agent based model of user behavior in an Internet discussion forum. PLOS One, 8(12):e80524, 2013.
[80] P. Sobkowicz and A. Sobkowicz. Dynamics of hate based Internet user networks. The European Physical Journal B, 73(4):633–643, 2010.
[81] P. Sobkowicz. Quantitative agent based model of opinion dynamics: Polish elections of 2015. PLOS One, 11(5):e0155098, 2016.
[82] D. Stauffer. Monte Carlo simulations of Sznajd models. Journal of Artificial Societies and Social Simulation, 5(1), 2001.
[83] D. Stauffer. Sociophysics: the Sznajd model and its applications. Computer Physics Communications, 146(1):93–98, 2002.
[84] D. Stauffer and P.M.C. de Oliveira. Persistence of opinion in the Sznajd consensus model: computer simulation. The European Physical Journal B - Condensed Matter, 30(4):587–592, 2002.
[85] P.H. Streefland. Public doubts about vaccination safety and resistance against vaccination. Health Policy, 55(3):159–172, 2001.
[86] N.J. Stroud. Polarization and partisan selective exposure. Journal of Communication, 60(3):556–576, 2010.
[87] W. Suen. The self-perpetuation of biased beliefs. The Economic Journal, 114(495):377–396, 2004.
[88] C.R. Sunstein. Risk and Reason: Safety, Law, and the Environment. Cambridge University Press, 2002.
[89] C.R. Sunstein. The availability heuristic, intuitive cost-benefit analysis, and climate change. Climatic Change, 77(1-2):195–210, 2006.
[90] C.R. Sunstein, S. Bobadilla-Suarez, S.C. Lazzaro, and T. Sharot. How people update beliefs about climate change: Good news and bad news. Available at SSRN 2821919, 2016.
[91] C.R. Sunstein. Deliberative trouble? Why groups go to extremes. The Yale Law Journal, 110(1):71–119, 2000.
[92] K. Sznajd-Weron and J. Sznajd. Opinion evolution in closed community. Int. J. Mod. Phys. C, 11:1157–1166, 2000.
[93] S. Tafuri, M.S. Gallone, M.G. Cappelli, D. Martinelli, R. Prato, and C. Germinario. Addressing the anti-vaccination movement and the role of HCWs. Vaccine, 2013.
[94] D. Tewksbury and J.M. Riles. Polarization as a function of citizen predispositions and exposure to news on the Internet. Journal of Broadcasting & Electronic Media, 59(3):381–398, 2015.
[95] P. Thagard and S. Findlay. Changing minds about climate change: Belief revision, coherence, and emotion. In E.J. Olsson and S. Enqvist, editors, Belief Revision Meets Philosophy of Science: Logic, Epistemology, and the Unity of Science, pages 329–345. Springer Science+Business Media B.V., 2011.
[96] A. Tversky and D. Kahneman. Judgment under uncertainty: Heuristics and biases. Science, 185(4157):1124–1131, 1974.
[97] A. Tversky and D. Kahneman. The framing of decisions and the psychology of choice. Science, 211:30, 1981.
[98] A. Tversky and D. Kahneman. Rational choice and the framing of decisions. The Journal of Business, 59(4):S251–S278, 1986.
[99] A. Tversky and D. Kahneman. Probabilistic reasoning. Readings in Philosophy and Cognitive Science, pages 43–68, 1993.
[100] A. Tversky, P. Slovic, and D. Kahneman. The causes of preference reversal. The American Economic Review, 80(1):204–217, 1990.
[101] G. Weisbuch, G. Deffuant, F. Amblard, and J.-P. Nadal. Interacting agents and continuous opinions dynamics. In R. Cowan and N. Jonard, editors, Heterogenous Agents, Interactions and Economic Performance, volume 521 of Lecture Notes in Economics and Mathematical Systems, pages 225–242. Springer Berlin Heidelberg, 2003.
[102] G. Weisbuch. Bounded confidence and social networks. The European Physical Journal B - Condensed Matter and Complex Systems, 38(2):339–343, 2004.
[103] M. Wojcieszak, B. Bimber, L. Feldman, and N.J. Stroud. Partisan news and political participation: Exploring mediated relationships. Political Communication, 33(2):241–260, 2016.
[104] R.M. Wolfe and L.K. Sharp. Anti-vaccinationists past and present. BMJ: British Medical Journal, 325(7361):430, 2002.
[105] R.M. Wolfe, L.K. Sharp, and M.S. Lipsky. Content and design attributes of antivaccination web sites.