Characterizing Political Fake News in Twitter by its Meta-Data
Julio Amador Díaz López, Axel Oehmichen, Miguel Molina-Solana (j.amador, axelfrancois.oehmichen11, [email protected])
Imperial College London
Abstract
This article presents a preliminary approach towards characterizing political fake news on Twitter through the analysis of their meta-data. In particular, we focus on more than 1.5M tweets collected on the day of the election of Donald Trump as 45th president of the United States of America. We use the meta-data embedded within those tweets in order to look for differences between tweets containing fake news and tweets not containing them. Specifically, we perform our analysis only on tweets that went viral, by studying proxies for users' exposure to the tweets, by characterizing accounts spreading fake news, and by looking at their polarization. We found significant differences in the distribution of followers, the number of URLs in tweets, and the verification of the users.
Introduction
While fake news, understood as deliberately misleading pieces of information, have existed since long ago (e.g. it is not unusual to receive news falsely claiming the death of a celebrity), the term reached the mainstream, particularly so in politics, during the 2016 presidential election in the United States. Since then, governments and corporations alike (e.g. Google and Facebook) have begun efforts to tackle fake news, as they can affect political decisions. Yet, the ability to define, identify and stop fake news from spreading is limited.

Since the Obama campaign in 2008, social media has been pervasive in the political arena in the United States. Studies report that up to 62% of American adults receive their news from social media. The wide use of platforms such as Twitter and Facebook has facilitated the diffusion of fake news by simplifying the process of receiving content with no significant third-party filtering, fact-checking or editorial judgement. Such characteristics make these platforms suitable means for sharing news that, disguised as legitimate ones, try to confuse readers.

Such use and their prominent rise have been confirmed by Craig Silverman, a Canadian journalist who is a prominent figure on fake news: "In the final three months of the US presidential campaign, the top-performing fake election news stories on Facebook generated more engagement than the top stories from major news outlets".

Our current research hence departs from the assumption that social media is a conduit for fake news, and asks whether fake news (as spam was some years ago) can be identified, modelled and eventually blocked. In order to do so, we use a sample of more than 1.5M tweets collected on November 8th 2016 (election day in the United States) with the goal of identifying features that tweets containing fake news are likely to have. As such, our paper aims to provide a preliminary characterization of fake news in Twitter by looking into the meta-data embedded in tweets.
Considering meta-data as a relevant factor of analysis is in line with findings reported by Morris et al. We argue that understanding differences between tweets containing fake news and regular tweets will allow researchers to design mechanisms to block fake news in Twitter.

Specifically, our goals are: 1) to compare the characteristics of tweets labelled as containing fake news to tweets labelled as not containing them, 2) to characterize, through their meta-data, viral tweets containing fake news and the accounts from which they originated, and 3) to determine the extent to which tweets containing fake news expressed polarized political views.

For our study, we used the number of retweets to single out those that went viral within our sample. Tweets within that subset (viral tweets hereafter) are varied and relate to different topics. We consider that a tweet contains fake news if its text falls within any of the following categories described by Rubin et al. (see next section for the details of such categories): serious fabrication, large-scale hoaxes, jokes taken at face value, slanted reporting of real facts, and stories where the truth is contentious. The dataset, manually labelled by an expert, has been publicly released and is available to researchers and interested parties.

From our results, the following main observations can be made:

• Distributions in the number of retweets, favourites and hashtags in tweets containing fake news are not significantly different from their counterparts in tweets not containing fake news.

• Accounts generating fake news are comparatively more often unverified than accounts not producing fake news.

• There are significant differences in both the number of friends and followers of the accounts creating tweets with fake news when compared with accounts not generating them.
• There are no significant differences in the number of media elements, but there are indications that the number of URLs is indeed different.

Our findings resonate with similar work done on fake news, such as the one from Allcott and Gentzkow. Therefore, even if our study is a preliminary attempt at characterizing fake news on Twitter using only their meta-data, our results provide external validity to previous research. Moreover, our work not only stresses the importance of using meta-data, but also underscores which parameters may be useful to identify fake news on Twitter.

The rest of the paper is organized as follows. The next section briefly discusses where this work is located within the literature on fake news and contextualizes the type of fake news we are studying. Then, we present our hypotheses, the data, and the methodology we follow. Finally, we present our findings, the conclusions of this study, and future lines of work.
Defining Fake News
Our research is connected to different strands of academic knowledge related to the phenomenon of fake news. In relation to Computer Science, a recent survey by Conroy and colleagues identifies two popular approaches to single out fake news. On the one hand, the authors pointed to linguistic approaches, consisting in using text, its linguistic characteristics and machine learning techniques to automatically flag fake news. On the other, these researchers underscored the use of network approaches, which make use of network characteristics and meta-data to identify fake news.

With respect to social sciences, efforts from psychology, political science and sociology have been dedicated to understanding why people consume and/or believe misinformation. Most of these studies consistently reported that psychological biases such as priming effects and confirmation bias play an important role in people's ability to discern misinformation.

In relation to the production and distribution of fake news, a recent paper in the field of Economics found that most fake news sites use names that resemble those of legitimate organizations, and that sites supplying fake news tend to be short-lived. These authors also noticed that fake news items are more likely to be shared than legitimate articles coming from trusted sources, and that they tend to exhibit a larger level of polarization.

The conceptual issue of how to define fake news is a serious and unresolved one. As the focus of our work is not attempting to shed light on this, we will rely on work by other authors to describe what we consider as fake news. In particular, we use the categorization provided by Rubin et al. The five categories they described, together with illustrative examples from our dataset, are as follows:
1. Serious fabrication. These are news stories created entirely to deceive readers. During the 2016 US presidential election there were plenty of examples of this (e.g. claiming a celebrity had endorsed Donald Trump when that was not the case). For instance: [@JebBush - Maybe Donald negotiated a deal with his buddy @HillaryClinton. Continuing this path will put her in the White House. https://t.co/AlvByiSrMn]
2. Large-scale hoaxes. Deceptions that are then reported in good faith by reputable sources. A recent example would be the story that the founder of Corona beer made everyone in his home village a millionaire in his will. For instance: [@FullFrontalSamB - Unfortunately Melania copied HER ballot from Michelle so... Donald just voted for Hillary. https://t.co/x2ZimtFxyl]
3. Jokes taken at face value. Humour sites such as the Onion or Daily Mash present fake news stories in order to satirise the media. Issues can arise when readers see the story out of context and share it with others. For instance: [@BBCTaster - BREAKING NEWS: If you face-swap @realDonaldTrump with @MayorofLondon you get Owen Wilson. https://t.co/YY8a20wQVP]
4. Slanted reporting of real facts. Selectively-chosen but truthful elements of a story put together to serve an agenda. One of the most prevalent examples of this is the well-known problem of voting machine faults. For instance: [@NeilTurner - @realDonaldTrump Trump predicted it. https://t.co/BM3UxA7heR]
5. Stories where the 'truth' is contentious. On issues where ideologies or opinions clash (for example, territorial conflicts) there is sometimes no established baseline for truth. Reporters may be unconsciously partisan, or perceived as such. For instance: [@FoxNews - Report: @HillaryClinton's plan would raise taxes $ https://t.co/Dh1tWM4FAP]

Research Hypotheses
Previous works in the area (presented in the section above) suggest that there may be important determinants for the adoption and diffusion of fake news. Our hypotheses build on them and identify three important dimensions that may help distinguish fake news from legitimate information:
1. Exposure. Given that psychological effects such as priming and confirmation biases are likely to increase the probability that an individual believes in a certain piece of information, we believe exposure to misinformation is an important determinant of a fake news distribution strategy.
2. Characterization. Given that distributors of fake news may want to simulate legitimate information outlets, we believe it is important to analyse specific features that may help a fake news outlet 'disguise' itself as a legitimate one.
3. Polarization. Given that fake news outlets are more likely to attract attention with polarizing content, we believe the level of polarization is an important determinant of a fake news distribution strategy.

Taking those three dimensions into account, we propose the following hypotheses about the features that we believe can help to identify tweets containing fake news among those not containing them. They will be later tested over our collected dataset.

Exposure.
H1A:
The average number of retweets of a viral tweet containing fake news is larger than that of viral tweets not containing them.
H1B:
The average number of hashtags and user mentions in viral tweets with fake news is larger than that of viral tweets with no fake news in them.
Characterization.
H2A:
Viral tweets containing fake news have a larger number of URLs.
H2B:
The creation date of an account generating tweets with fake news is more recent than that of accounts tweeting non-fake news content.
H2C:
The friends/followers ratio of accounts tweeting fake news is larger than that of those creating tweets without them.
Polarization.
H3:
Viral tweets containing fake news are slanted towards one candidate.
Data and Methodology
For this study, we collected publicly available tweets using Twitter's public API. Given the nature of the data, it is important to emphasize that such tweets are subject to Twitter's terms and conditions, which indicate that users consent to the collection, transfer, manipulation, storage, and disclosure of data. Therefore, we do not expect ethical, legal, or social implications from the usage of the tweets.

Our data was collected using search terms related to the presidential election held in the United States on November 8th 2016. Particularly, we queried Twitter's streaming API, more precisely the filter endpoint of the streaming API, using a set of election-related hashtags together with the user handles @realDonaldTrump and @HillaryClinton. The data collection ran for just one day (Nov 8th 2016).

One straightforward way of sharing information on Twitter is by using the retweet functionality, which enables a user to share an exact copy of a tweet with his followers. Among the reasons for retweeting, boyd et al. reported the will: 1) to spread tweets to a new audience, 2) to show one's role as a listener, and 3) to agree with someone or validate the thoughts of others. As indicated, our initial interest is to characterize tweets containing fake news that went viral (as they are the most harmful ones, since they reach a wider audience), and to understand how they differ from other viral tweets (that do not contain fake news). For our study, we consider that a tweet went viral if it was retweeted more than 1000 times.

Once we had the dataset of viral tweets, we eliminated duplicates (some of the tweets were collected several times because they matched several handles) and an expert manually inspected the text field within the tweets to label them as containing fake news, or not containing them (according to the characterization presented before).
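The de-duplication and viral-threshold step just described can be sketched in a few lines. The field names below ("id", "retweet_count") follow Twitter's API payload, but the sample records are illustrative, not part of the study's dataset:

```python
def viral_tweets(tweets, threshold=1000):
    """Return unique tweets retweeted more than `threshold` times."""
    seen = set()
    viral = []
    for tw in tweets:
        if tw["id"] in seen:  # drop duplicate captures (tweet matched several handles)
            continue
        seen.add(tw["id"])
        if tw["retweet_count"] > threshold:
            viral.append(tw)
    return viral

sample = [
    {"id": 1, "retweet_count": 2500},
    {"id": 1, "retweet_count": 2500},  # duplicate capture
    {"id": 2, "retweet_count": 40},
    {"id": 3, "retweet_count": 1200},
]
print([t["id"] for t in viral_tweets(sample)])  # → [1, 3]
```

Only the tweets surviving this filter were passed on to the annotator.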
This annotated dataset is publicly available and can be freely reused.

Finally, we use the following fields within tweets (from the ones returned by Twitter's API) to compare their distributions and look for differences between viral tweets containing fake news and viral tweets not containing fake news:

• Exposure: created_at, retweet_count, favourites_count and hashtags.

• Characterization: screen_name, verified, urls, followers_count, friends_count and media.

• Polarization: text and hashtags.

In the following section, we provide graphical descriptions of the distribution of each of the identified attributes for the two sets of tweets (those labelled as containing fake news and those labelled as not containing them). Where appropriate, we normalized and/or took logarithms of the data for better representation. To gain a better understanding of the significance of those differences, we use the Kolmogorov-Smirnov test with the null hypothesis that both distributions are equal.

Results
The sample collected consisted of 1 785 855 tweets published by 848 196 different users. Within our sample, we identified 1327 tweets that went viral (retweeted more than 1000 times by the 8th of November 2016), produced by 643 users. This small subset of viral tweets was retweeted on 290 841 occasions in the observed time-window.

The 1327 'viral' tweets were manually annotated as containing fake news or not. The annotation was carried out by a single person in order to obtain a consistent annotation throughout the dataset. Out of those 1327 tweets, we identified 136 as potentially containing fake news (according to the categories previously described), and the rest were classified as 'not containing fake news'. Note that the categorization is far from being perfect, given the ambiguity of fake news themselves and the human judgement involved in the process of categorization. Because of this, we do not claim that this dataset can be considered a ground truth.

The following results detail characteristics of these tweets along the previously mentioned dimensions. Table 1 reports the actual differences (together with their associated p-values) of the distributions of viral tweets containing fake news and viral tweets not containing them for every variable considered.

             Kolmogorov-Smirnov test
  feature      difference   p-value
  Followers    0.2357       2.6E-6
  Friends      0.1747       0.0012
  URLs         0.1285       0.0358
  Favourites   0.1218       0.0535
  Mentions     0.1135       0.0862
  Media        0.0948       0.2231
  Retweets     0.0609       0.7560
  Hashtags     0.0350       0.9983

Table 1: For each one of the selected features, the table shows the difference between the set of tweets containing fake news and those not containing them, and the associated p-value (applying a Kolmogorov-Smirnov test). The null hypothesis is that both distributions are equal (two-sided). Results are ordered by increasing p-value.
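The differences in Table 1 are two-sample Kolmogorov-Smirnov statistics: the maximum vertical distance between the empirical cumulative distribution functions of the two groups. A minimal sketch of that statistic is shown below, on made-up data; in practice one would rely on a library implementation such as scipy.stats.ks_2samp, which also returns the p-value:

```python
import bisect

def ks_statistic(xs, ys):
    """Two-sample KS statistic: the largest absolute difference
    between the empirical CDFs of xs and ys."""
    xs, ys = sorted(xs), sorted(ys)

    def ecdf(sample, t):
        # fraction of observations <= t
        return bisect.bisect_right(sample, t) / len(sample)

    # the maximum distance is attained at one of the observed values
    return max(abs(ecdf(xs, t) - ecdf(ys, t)) for t in set(xs) | set(ys))

# illustrative samples, not the study's actual follower counts
a = [1, 2, 3, 4, 5]
b = [3, 4, 5, 6, 7]
print(round(ks_statistic(a, b), 2))  # → 0.4
```

A larger statistic (as for Followers, 0.2357) means the two distributions diverge more.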
Exposure
Figure 1 shows that, in contrast to other kinds of viral tweets, those containing fake news were created more recently. As such, Twitter users were exposed to fake news related to the election for a shorter period of time. However, in terms of retweets, Figure 2 shows no apparent difference between tweets containing fake news and tweets not containing them. That is confirmed by the Kolmogorov-Smirnov test, which does not discard the hypothesis that the associated distributions are equal.

Figure 1: Distribution of the date of creation of the tweets that were viral on November 8th. For clarity, the image only shows the year 2016, and no more than 150 tweets per day.

Figure 2: Density distributions of achieved retweets for tweets in our dataset 1) containing fake news and 2) not containing them. No differences are apparent.

Figure 3: Density distributions of the number of favourites that the user generating the tweet has. The differences are not statistically significant.

Figure 4: Distribution of the number of hashtags used in tweets labelled as containing fake news and those labelled as not containing them.

In relation to the number of favourites, users that generated at least one viral tweet containing fake news appear to have, on average, fewer favourites than users that do not generate them. Figure 3 shows the distribution of favourites. Despite the apparent visual differences, they are not statistically significant.

Finally, the number of hashtags used in viral fake news appears to be larger than that in other viral tweets. Figure 4 shows the density distribution of the number of hashtags used. However, once again, we were not able to find any statistical difference between the average number of hashtags in a viral tweet and the average number of hashtags in viral fake news.
Characterization
We found that 82 users within our sample were spreading fake news (i.e. they produced at least one tweet which was labelled as fake news). Out of those, 34 had verified accounts, and the rest were unverified. From the 48 unverified accounts, 6 have been suspended by Twitter at the date of writing, 3 tried to imitate legitimate accounts of others, and 4 accounts have already been deleted. Figure 5 shows the proportion of verified to unverified accounts for viral tweets (containing fake news vs. not containing fake news). From the chart, it is clear that there is a higher chance of fake news coming from unverified accounts.

Figure 5: Tweets labelled as containing fake news mostly come from non-verified users. This contrasts with the opposite pattern for tweets not containing them (which mostly originate from verified accounts).

Turning to friends, accounts distributing fake news appear to have, on average, the same number of friends as those distributing tweets with no fake news. However, the density distribution of friends from the accounts (Figure 6) shows that there is indeed a statistically significant difference in their distributions.

Figure 6: Density distributions (for tweets labelled as containing fake news, and tweets labelled as not containing them) of the number of friends that the user generating the tweet has. The difference is statistically significant.

If we take into consideration the number of followers, accounts generating viral tweets with fake news do have a very different distribution on this dimension, compared to those accounts generating viral tweets with no fake news (see Figure 7). In fact, such differences are statistically significant.

Figure 7: Density distributions of the number of followers that the accounts generating viral tweets (within our sample) have. Accounts producing fake news have a narrower window of followers.

A useful representation for friends and followers is the friends/followers ratio. Figures 8 and 9 show this representation. Notice that accounts spreading viral tweets with fake news have, on average, a larger friends/followers ratio. The distribution for those accounts not generating fake news is more even.

Figure 8: Density distribution of the friends/followers ratio, showing quartiles. Accounts that generate fake news tend to have a higher ratio value.

Figure 9: Density distribution of the friends/followers ratio. Note that they do not follow a normal distribution. A higher friends/followers ratio exists for accounts that have produced at least one tweet labelled as containing fake news.

With respect to the number of mentions, Figure 10 shows that viral tweets labelled as containing fake news appear to use mentions to other users less frequently than viral tweets not containing fake news. In other words, tweets containing fake news mostly contain 1 mention, whereas other tweets tend to have two. Such differences are statistically significant.

Figure 10: Number of mentions within tweets labelled as containing fake news and tweets not containing them. There is an almost similar distribution of 1 and 2 mentions for tweets containing fake news. This contrasts with tweets not containing fake news, in which 2 mentions is much more common.

The analysis (Figure 11) of the presence of media in the tweets in our dataset shows that tweets labelled as not containing fake news appear to present more media elements than those labelled as fake news. However, the difference is not statistically significant. On the other hand, Figure 12 shows that viral tweets containing fake news appear to include more URLs to other sites than viral tweets that do not contain fake news. In fact, the difference between the two distributions is statistically significant (assuming α = 0.05).

Polarization
Finally, manual inspection of the text field of those viral tweets labelled as containing fake news shows that 117 of such tweets expressed support for Donald Trump, while only 8 supported Hillary Clinton. The remaining tweets contained fake news related to other topics, not expressing support for any of the candidates.

Figure 11: Number of media elements embedded within viral tweets (labelled as containing fake news vs. labelled as not containing them).

Figure 12: Number of URLs embedded within viral tweets (with fake news vs. without them). Differences are statistically significant with α = 0.05.

Discussion
As a summary, and constrained by our existing dataset, we made the following observations regarding differences between viral tweets labelled as containing fake news and viral tweets labelled as not containing them:

• Less than 0.1% of the tweets went viral. Out of those, only 10% were labelled as containing fake news.

• Tweets containing fake news that became viral during the day of the election were mostly created very shortly before that day, or on the day itself. That contrasts with tweets not containing fake news (which were initially created much before election day).

• Considering retweets, favourites and hashtags as proxies for exposure, we did not find any difference between viral tweets labelled as containing fake news and viral tweets labelled as not containing them.

• The characterization of accounts spreading fake news has shown that the proportion of unverified accounts that generate at least one tweet containing fake news is larger than that of accounts spreading tweets not labelled as fake news.

• Even if the accounts producing fake news are, on average, following the same number of other users as those producing tweets with no fake news in them, the distributions of followers are statistically different.

• There is no significant difference between the number of media elements in viral tweets labelled as containing fake news and viral tweets labelled as not containing them.

• Viral tweets labelled as containing fake news tend to have more URLs than viral tweets labelled as not containing fake news.

• Regarding polarization, fake news were heavily supportive of the Trump campaign.

These findings (related to our initial hypotheses in Table 2) clearly suggest that there are specific pieces of meta-data about tweets that may allow the identification of fake news. One such parameter is the time of exposure. Viral tweets containing fake news are shorter-lived than those containing other types of content. This notion seems to resonate with our findings showing that a number of accounts spreading fake news have already been deleted or suspended by Twitter at the time of writing.
If one considers that researchers using different data have found similar results, it appears that the lifetime of accounts, together with the age of the questioned viral content, could be useful to identify fake news. In the light of this finding, newly created accounts should probably be put under higher scrutiny than older ones. This, in fact, would be a useful a-priori bias for a Bayesian classifier.

Accounts spreading fake news appear to have a larger friends/followers ratio (i.e. they have, on average, the same number of friends but a smaller number of followers) than those spreading viral content only. Together with the fact that, on average, tweets containing fake news have more URLs than those spreading viral content, it is possible to hypothesize that both the friends/followers ratio of the account producing a viral tweet and the number of URLs contained in such a tweet could be useful to single out fake news in Twitter. Not only that, but our finding related to the number of URLs is in line with intuitions behind the incentives to create fake news commonly found in the literature (in particular, that of obtaining revenue through click-through advertising).

Finally, it is interesting to notice that the content of viral fake news was highly polarized. This finding is also in line with those of Allcott and Gentzkow. This feature suggests that textual sentiment analysis of the content of tweets (as most researchers do), together with the above-mentioned parameters from meta-data, may prove useful for identifying fake news.

H1A: The average number of retweets of a viral tweet containing fake news is larger than that of viral tweets not containing them.  NOT CONFIRMED
H1B: The average number of hashtags and user mentions in viral tweets with fake news is larger than that of viral tweets with no fake news in them.  NOT CONFIRMED
H2A: Viral tweets containing fake news have a larger number of URLs.  CONFIRMED
H2B: The creation date of an account generating tweets with fake news is more recent than that of accounts tweeting non-fake news content.  CONFIRMED
H2C: The friends/followers ratio of accounts tweeting fake news is larger than that of those creating tweets without them.  CONFIRMED
H3: Viral tweets containing fake news are slanted towards one candidate.  CONFIRMED

Table 2: Summary of our conclusions and tested hypotheses.
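The "a-priori bias for a Bayesian classifier" idea mentioned above can be illustrated with a toy Bernoulli naive Bayes over boolean meta-data features. Everything below is a hypothetical sketch: the features (verified account, many URLs, recently created account) echo our findings, but the training rows are invented and no such classifier was trained in this study:

```python
import math

def train(rows, labels, alpha=1.0):
    """Fit a Bernoulli naive Bayes: log-prior per class and, per feature,
    log-probabilities of the feature being on/off (Laplace smoothing alpha)."""
    model = {}
    for c in set(labels):
        idx = [i for i, lab in enumerate(labels) if lab == c]
        prior = math.log(len(idx) / len(labels))
        feats = []
        for j in range(len(rows[0])):
            on = sum(rows[i][j] for i in idx)
            p_on = (on + alpha) / (len(idx) + 2 * alpha)
            feats.append((math.log(p_on), math.log(1.0 - p_on)))
        model[c] = (prior, feats)
    return model

def predict(model, x):
    """Pick the class with the highest posterior log-score for features x."""
    def score(c):
        prior, feats = model[c]
        return prior + sum(lp_on if v else lp_off
                           for v, (lp_on, lp_off) in zip(x, feats))
    return max(model, key=score)

# hypothetical features: [verified, many_urls, recent_account]
X = [[1, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 1], [0, 1, 1], [0, 0, 1]]
y = ["legit", "legit", "legit", "fake", "fake", "fake"]
model = train(X, y)
print(predict(model, [0, 1, 1]))  # → fake
```

The class priors here are uniform; the a-priori bias against newly created accounts would enter through the `recent_account` likelihoods (or through non-uniform priors) once estimated from real labelled data.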
Conclusions
With the election of Donald Trump as President of the United States, the concept of fake news has become a broadly-known phenomenon that is getting tremendous attention from governments and media companies. We have presented a preliminary study on the meta-data of a publicly available dataset of tweets that became viral during the day of the 2016 US presidential election. Our aim is to advance the understanding of which features might be characteristic of viral tweets containing fake news in comparison with viral tweets without fake news.

We believe that the only way to automatically identify those deceitful tweets (i.e. containing fake news) is by actually understanding and modelling them. Only then can the processes of tagging and blocking these tweets be successfully automated. In the same way that spam was fought, we anticipate fake news will undergo a similar evolution, with social platforms implementing tools to deal with them. With most works so far focusing on the actual content of the tweets, ours is a novel attempt from a different, but also complementary, angle.

Within the used dataset, we found that there are differences around exposure, the characteristics of accounts spreading fake news, and the tone of the content. Those findings suggest that it is indeed possible to model and automatically detect fake news. We plan to replicate and validate our experiments on an extended sample of tweets (covering up to 4 months after the US election), and to test the predictive power of the features we found relevant within our sample.
Author Disclosure Statement
No competing financial interests exist.
References

Connolly K, Chrisafis A, McPherson P, Kirchgaessner S, Haas B, Phillips D, Hunt E, Safi M. Fake news: an insidious trend that's fast becoming a global problem. The Guardian, 02 Dec 2016. Accessed: 2017-05-03.

Fact check now available in Google Search and News around the world. https://blog.google/products/search/fact-check-now-available-google-search-and-news-around-world/. Accessed: 2017-05-20.

News feed FYI: New test with related articles. https://newsroom.fb.com/news/2017/04/news-feed-fyi-new-test-with-related-articles/. Accessed: 2017-05-15.

Hillary Clinton blames the Russians, Facebook, and Fake News for her loss. http://fortune.com/2017/05/31/clinton-fake-news/. Accessed: 2017-05-31.

Gottfried J, Shearer E. News use across social media platforms 2016. Technical report, Pew Research Center, 2016.

Silverman C. Lies, damn lies and viral content. Technical report, Tow Center for Digital Journalism, 2015. doi:10.7916/D8Q81RHH.

Morris M, Counts S, Roseway A, Hoff A, Schwarz J. Tweeting is believing?: understanding microblog credibility perceptions. In Procs. ACM 2012 Conf. Computer Supported Cooperative Work, 2012, 441-450.

Rubin VL, Chen Y, Conroy NJ. Deception detection for news: three types of fakes. In Proceedings of the 78th ASIS&T Annual Meeting: Information Science with Impact: Research in and for the Community, 2015, 83:1-83:4.

Amador J, Oehmichen A, Molina-Solana M. Viral tweets with fake news on 2016 US election day. http://dx.doi.org/10.5281/zenodo.1048820, 2017. doi:10.5281/zenodo.1048820.

Allcott H, Gentzkow M. Social media and fake news in the 2016 election. Technical Report 23089, National Bureau of Economic Research, 2017. doi:10.3386/w23089.

Conroy NJ, Chen Y, Rubin VL. Automatic deception detection: Methods for finding fake news. In Proceedings of the 78th ASIS&T Annual Meeting: Information Science with Impact: Research in and for the Community, 2015, 82:1-82:4.

Pennycook G, Rand DG. Who Falls for Fake News? The Roles of Analytic Thinking, Motivated Reasoning, Political Ideology, and Bullshit Receptivity. 2017. URL https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3023545.

Flynn D, Nyhan B, Reifler J. The nature and origins of misperceptions: Understanding false and unsupported beliefs about politics. Advances in Political Psychology 2017; 38(S1):127-150. doi:10.1111/pops.12394.

Polage DC. Making up History: False Memories of Fake News Stories. Europe's Journal of Psychology 2012; 8(2):245-250. ISSN 1841-0413. doi:10.5964/ejop.v8i2.456. URL http://ejop.psychopen.eu/article/view/456.

Swire B, Berinsky AJ, Lewandowsky S, Ecker UKH. Processing political misinformation: comprehending the Trump phenomenon. Royal Society Open Science 2017; 4(3):160802. ISSN 2054-5703. doi:10.1098/rsos.160802.