Images, Emotions, and Credibility: Effect of Emotional Facial Images on Perceptions of News Content Bias and Source Credibility in Social Media
ALIREZA KARDUNI, RYAN WESSLEN∗, DOUGLAS MARKANT∗, and WENWEN DOU∗, University of North Carolina at Charlotte, United States

Images are an indispensable part of the news content we consume. Highly emotional images from sources of misinformation can greatly influence our judgements. We present two studies on the effects of emotional facial images on users' perception of bias in news content and the credibility of sources. In study 1, we investigate the impact of happy and angry facial images on users' decisions. In study 2, we focus on sources' systematic emotional treatment of specific politicians. Our results show that, depending on the political orientation of the source, the cumulative effect of angry facial emotions impacts users' perceived content bias and source credibility. When sources systematically portray specific politicians as angry, users are more likely to find those sources less credible and their content more biased. These results highlight how implicit visual propositions manifested by emotions in facial expressions might have a substantial effect on our trust in news content and sources.

Additional Key Words and Phrases: emotion, images, facial expressions, misinformation, credibility, bias
ACM Reference Format:
Alireza Karduni, Ryan Wesslen, Douglas Markant, and Wenwen Dou. 2018. Images, Emotions, and Credibility: Effect of Emotional Facial Images on Perceptions of News Content Bias and Source Credibility in Social Media. 1, 1 (March 2018), 23 pages. https://doi.org/10.1145/1122445.1122456
Following the turbulent 2020 United States presidential election, the storming of the Capitol by rioters was a shock to many observers worldwide. Observing such an incident, we might ask what caused rioters to not believe in the integrity of the election. More specifically, what causes audiences to find sources producing such misinformation credible, trust these sources, and consequently believe their messages? In fact, recent work has shown that beliefs about content accuracy might not necessarily correlate with beliefs about source credibility and truthfulness [37]. This is a very complex problem with many interconnected aspects concerning the sources of news and their agendas, the specific strategies and characteristics they use to produce content, and the cognitive processes of content consumers [9, 38].

News sources curate their text and images with specific styles, tones, and emotions. Mainstream news media and misinformation sources take advantage of highly emotionalized content to influence their audiences [1, 27, 36, 38]. Various studies have shown that these strategies are often effective. People are more likely to be drawn to highly emotionalized news, click on articles associated with
Authors' address: Alireza Karduni, [email protected]; Ryan Wesslen, [email protected]; Douglas Markant, [email protected]; Wenwen Dou, [email protected], University of North Carolina at Charlotte, Charlotte, North Carolina, United States.

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].
© 2018 Association for Computing Machinery.
XXXX-XXXX/2018/3-ART $15.00
https://doi.org/10.1145/1122445.1122456

extreme emotions, and share headlines with more negative sentiments [3, 26]. Emotional content is also linked to virality on social media [2]. Furthermore, participants' self-reported experience of heightened emotion, such as elevated anger or sadness, increases their perceived accuracy of false news but not of factually correct news [16].
In addition to emotional textual content, news and misinformation sources take advantage of social media's highly visual nature by deliberately choosing images that amplify the persuasive power of the text content [38]. Examples of such visual content include images that communicate racist concepts not present within the text [20], politicians in the US and Europe [17], misleading images of historical diseases [8], and deceptive deep-fake images [33]. Images are also shown to impact how users judge the credibility of information sources in several ways. For example, after seeing an image of a brain, users are more likely to believe claims in certain scientific articles [18, 29], while images of smoking increase belief in messages in warning signals [31]. Articles accompanied by alarming images [14] or ones that depict victimization [39] have been shown to increase users' selective interaction with the articles. Emotional content in images also impacts the likelihood of people believing a statement. For example, being exposed to highly negative emotional images about different phenomena increases the believability of news content [34]. In comparison to positive imagery, exposure to negative imagery has been associated with a greater perceived accuracy of false information [25].

This paper investigates how the accumulation of positive or negative facial expressions in images in social media posts affects users' judgments about content bias and source credibility. Source credibility is a primary factor in the persuasiveness of news sources [7, 37], while content bias, defined as "having a perspective that is skewed," is a distinct element in determining source credibility [37]. We seek to answer the following questions:
1) how news content accompanied by images with happy (positive) or angry (negative) facial expressions impacts users' judgements about the credibility of content and news sources; and
2) how users' prior attitudes towards political identities impact their judgements in light of emotional facial depictions of those identities in news content. To answer these questions, we conducted two consecutive preregistered controlled experiments:

(1) In study 1, we examined news accounts with highly emotionalized visual content without focusing on specific topics or politicians. We investigated how angry or happy facial expressions in images impact users' judgements about multiple news accounts. Specifically, we evaluated whether exposure to highly angry or happy facial expressions impacts users' perception of anonymized right/left-leaning and mainstream/misinformation accounts.

(2) In study 2, we expanded our exploration of how emotions in images impact users' judgments about sources. We focused on sources with highly emotionalized visual coverage of specific influential politicians such as Donald Trump or Angela Merkel. Furthermore, we investigated how users' prior attitudes towards each politician interact with sources' angry or happy portrayal of those politicians when making judgements about sources.

For both studies, in addition to quantitative analysis, we qualitatively analyzed users' comments about their decisions to interpret the subtleties of users' decisions under emotionalized visual information. In study one, we observe that users find sources propagating tweets with angry facial expressions to be less credible. However, we noticed that this effect is in general reduced by sources' political orientation. We suspected that this reduction could be due to differences in emotional facial expressions across the topics covered by sources, in which a source might cover news about one person with angry imagery and another with happy imagery. In study two, we found that when users are exposed to continuous angry coverage of specific politicians, they are more likely to perceive the content as systematically biased and the source as less credible.
These studies highlight the importance of visual information in studying and combating misinformation on social media, and the extent to which highly emotionalized visual coverage by sources might impact users' trust in both mainstream and misinformation media across the political spectrum.
Considering users' perception of source credibility as a function of the content they consume, we formulate the overall structure of the studies as:

(1) Users are instructed to go through multiple sets of curated social media posts from anonymized sources of news.
(2) Users are asked to evaluate bias for multiple posts from one source before evaluating credibility.
(3) In order to take into account individual differences in the amount of information a user needs to come to a decision, we allow users to form opinions after reading and evaluating a non-fixed number of posts. In other words, users are instructed to view as many posts as they need.
(4) After deciding they have viewed and evaluated enough posts from a source, users are instructed to provide a credibility rating of the source.

This structure is repeated for both studies 1 and 2. The randomized treatments are applied by altering the content users see from each source (details of each controlled experiment are provided in its respective section). Another important aspect of this study is observing how users' uncertainty changes under different conditions. Tormala and Petty argue that more certain attitudes and judgements have important consequences, including guiding behaviors, resisting persuasion, and persisting over time [32]. Thus, in addition to eliciting perceptions of content bias and source credibility, we ask users to report the uncertainty of their judgements. We also collect users' ratings of sources' political orientations. Finally, we ask users to provide a short text description of how they arrived at their decision.
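To make the hierarchy of responses concrete (tweet-level bias ratings nested within a source-level judgement), the record collected per source can be sketched as follows; all field and class names are our own illustration, not the study software's actual schema.

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class TweetRating:
    """Tweet-level response: perceived bias in [0, 1] plus an uncertainty range."""
    bias: float                      # 0 = unbiased, 1 = biased
    bias_range: Tuple[float, float]  # confidence interval within [0, 1]

@dataclass
class SourceEvaluation:
    """Source-level responses, given after a non-fixed number of posts."""
    source_id: str
    tweet_ratings: List[TweetRating] = field(default_factory=list)
    credibility: Optional[float] = None               # 0 = not credible, 1 = credible
    credibility_range: Optional[Tuple[float, float]] = None
    orientation: Optional[float] = None               # -1 = liberal, 0 = center, 1 = conservative
    orientation_range: Optional[Tuple[float, float]] = None
    comment: str = ""                                 # free-text rationale

    def n_viewed(self) -> int:
        # users decide for themselves how many posts to view before judging
        return len(self.tweet_ratings)

ev = SourceEvaluation("anonymized_source_A")
ev.tweet_ratings.append(TweetRating(0.70, (0.60, 0.85)))
ev.tweet_ratings.append(TweetRating(0.55, (0.40, 0.70)))
ev.credibility, ev.credibility_range = 0.30, (0.20, 0.45)
print(ev.n_viewed())  # → 2
```

The nesting mirrors the two levels of the later mixed-effects analysis: bias is recorded per tweet, while credibility and political orientation are recorded once per source.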
As suggested by Karduni et al. [10], we aimed to allow users to evaluate bias and credibility on a spectrum as opposed to a binary choice. We elicit users' perceived bias as a continuous number between 0 (unbiased) and 1 (biased), along with their uncertainty around that decision as a confidence interval in the same range. We elicit users' perceived credibility of a source as a number between 0 (not credible) and 1 (credible), and users' perceived political orientation of a source as a number between -1 (liberal) and 1 (conservative), with 0 as center. For both variables, we elicit users' confidence as a confidence interval within each respective domain.

To elicit users' beliefs and uncertainty about the perceived bias, credibility, and political orientation variables as continuous values rather than binary choices, we adopted a modified version of the Line + Cone technique introduced by Karduni et al. [12]. The Line + Cone technique was specifically designed to elicit users' beliefs about correlations between two variables. In that study, users were instructed to first select a line that best represents their belief and then draw a range that encodes other plausible alternatives to their belief. In other words, this method elicits users' belief as a numerical value together with a range denoting their uncertainty around the decision. In this study, we introduce the Line + Range, which adopts a similar design approach (see Figure 1). The Line + Range method enables eliciting users' choices on different continuous spectrums such as bias, credibility, or political orientation. This method allows us to elicit users' perceptions as a continuous measure between bounded values while eliciting a range highlighting users' uncertainty in their decisions.
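The mapping underlying such an elicitation widget can be sketched as below; the pixel geometry and function names are our own illustration, not the actual ReactJS/D3.js implementation. A click position becomes a bounded value, and a hovered spread becomes an uncertainty interval clipped to the scale.

```python
def to_value(x_px, width_px, lo=0.0, hi=1.0):
    """Map a horizontal mouse position on the widget to a value on a
    bounded scale, clamped to the widget's extent."""
    frac = min(max(x_px / width_px, 0.0), 1.0)
    return lo + frac * (hi - lo)

def to_range(center, spread, lo=0.0, hi=1.0):
    """Turn the chosen value plus a hovered spread into an uncertainty
    interval, clipped to the scale's bounds."""
    return (max(lo, center - spread), min(hi, center + spread))

# A click 3/4 of the way across a bias scale rendered 400 px wide:
bias = to_value(300, 400)
print(bias)  # → 0.75
print(tuple(round(v, 2) for v in to_range(bias, 0.4)))  # → (0.35, 1.0)

# The same mapping serves the political-orientation scale in [-1, 1]:
print(to_value(100, 400, lo=-1.0, hi=1.0))  # → -0.5
```

A wide interval (large spread) encodes an uncertain user; a narrow one encodes high certainty, matching the two panels of Figure 1.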
A: The user hovers the mouse over the Line + Range to choose the line that best represents their belief, then clicks to make a choice. B: The user hovers the mouse to choose a range of other plausible lines around their belief; one panel shows a user who is highly certain in their judgement, the other a user who is very uncertain.
Fig. 1. The Line + Range elicitation method.
In this study, we started with the same datasets used in [10, 13], which included a collection of tweets from multiple mainstream sources of news, as well as multiple accounts labeled by third-party sources as being likely to produce hoaxes, propaganda, or in general fake news [36]. To create a dataset of tweets that include facial expressions, we processed the dataset mentioned above through several computational methods. First, we used the python face-recognition library (https://pypi.org/project/face-recognition/) to identify images that include faces. To identify images of different politicians, we used Google's FECNet [28] to extract feature vectors from all faces in the images and used the HDBSCAN clustering algorithm [19] to cluster the images of faces. We then manually identified prominent clusters of influential politicians. To extract emotion scores from images, we utilized two readily available libraries, EmoPy (https://github.com/thoughtworksarts/EmoPy) and DeepFace (https://pypi.org/project/deepface/), which provided us the ability to sort the images based on different emotion predictions [30]. After generating a larger candidate dataset, we manually excluded images that were low-resolution, inaccurately assigned to a cluster, or mislabeled by the emotion prediction libraries.

We conducted both studies 1 and 2 using a custom interactive interface developed with ReactJS and multiple JavaScript libraries, such as D3.js for the Line + Range elicitation technique. The responses from users were collected in a MongoDB database through a NodeJS backend hosted on Heroku. The interface first prompts users with a consent form per IRB requirements. After clicking
on consent, users are randomly assigned to their predefined conditions as specified by the study. Immediately after providing consent, we provide users with a brief demographic questionnaire. The next page provides instructions for the Line + Range elicitation technique. After the elicitation technique instructions, the study interface provides instructions on the study's goals, i.e., describing credibility and bias and the choices users need to make. The interface then routes users to the main study page. Based on their random condition assignment, users go through pages resembling a social media page. Users go through tweets one by one, using the Line + Range method to provide a bias rating. Users are prompted to click on a "Make a decision" button that opens a pop-up window with multiple questions about credibility and political orientation, and an open text box for comments on each user's decision. After providing ratings for one account, the study page is refreshed with a new round of tweets from a new account. This process is repeated until all accounts are evaluated. In the final stage, users are provided with a brief questionnaire to record their self-described political leanings, as well as two open-ended attention check questions. The final page of the study provides users with a debriefing on the study, as well as a unique token for study incentive processing purposes.

Our experimental design for both studies consisted of repeated measures for each user within two levels of responses: source-level responses (credibility) and tweet-level responses (bias). Furthermore, users' responses were continuous variables bounded between 0 and 1. For both studies, we used mixed-effects beta regressions to address the hierarchical design of our studies and the bounded dependent variables.
Within the text, model coefficients are reported as log odds ratios with corresponding confidence intervals. In the model figures, for ease of interpretation, we transformed the log odds ratios to odds ratios. Odds ratios show the direction and strength of how each independent variable impacts the dependent variables. We used the normal approximation to calculate p-values for fixed effects from the t-values produced by lme4. In both studies, to measure the effects of our experimental conditions, we built two mixed-effects models using R's glmmTMB for a beta regression.
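The two transformations described here can be sketched in a few lines; the coefficient and standard error below are hypothetical illustrations, not estimates from our models.

```python
import math

def odds_ratio(beta):
    """Exponentiate a fixed-effect coefficient (log odds ratio) into an
    odds ratio, as plotted in the model figures."""
    return math.exp(beta)

def normal_approx_p(beta, se):
    """Two-sided p-value via the normal approximation:
    z = beta / se, p = 2 * (1 - Phi(|z|)), with Phi from the error function."""
    z = abs(beta / se)
    phi = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return 2.0 * (1.0 - phi)

# A hypothetical coefficient of 0.5 (log odds) with standard error 0.2:
print(round(odds_ratio(0.5), 3))            # → 1.649
print(round(normal_approx_p(0.5, 0.2), 4))  # → 0.0124
```

An odds ratio above 1 indicates the predictor pushes the (logit-scale) response upward; below 1, downward.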
First, we used Non-negative Matrix Factorization (NMF) [5] to extract topics from the comments. We then qualitatively evaluated the topics through each topic's top terms and top documents. Topics were then qualitatively categorized into themes based on their similarity. Each qualitative analysis section includes samples of the comments most related to each theme.
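We used an off-the-shelf NMF implementation for this analysis; the factorization it performs can be illustrated with plain multiplicative updates on a toy document-term matrix (the data below is fabricated for illustration only).

```python
import numpy as np

def nmf(V, k, iters=200, seed=0):
    """Factor a nonnegative doc-term matrix V (docs x terms) into
    W (docs x topics) and H (topics x terms) using Lee & Seung
    multiplicative updates, which never increase reconstruction error."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, k)) + 1e-3
    H = rng.random((k, m)) + 1e-3
    eps = 1e-9  # guard against division by zero
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Toy comment-term counts with two clear themes (bias terms vs. image terms):
V = np.array([[3, 2, 0, 0],
              [2, 3, 0, 0],
              [0, 0, 3, 2],
              [0, 0, 2, 3]], dtype=float)
W, H = nmf(V, k=2)
err = np.linalg.norm(V - W @ H)
print(W.shape, H.shape, err < np.linalg.norm(V))  # → (4, 2) (2, 4) True
```

Each row of H can then be read as a topic (its largest entries are the "top terms"), and each document's largest W entry gives its dominant topic, which is how topics are inspected and grouped into themes.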
In Study 1 we examined how exposure to positive (happy) and negative (angry) emotions in images influences perceived content bias (tweets) and source credibility (accounts). Participants completed a task in which they observed a series of tweets from eight different accounts. Text data (tweets) was collected from the Twitter streaming API from October 25th to January 25th, 2018, from multiple news accounts [11]. The accounts are either labeled as mainstream (e.g., NYTimes, NYPost, CNN, and Fox News) or known to produce misinformation (e.g., Breitbart, amLookout, investWatchBlog). The accounts are also categorized as being right-leaning or left-leaning. Given these two dimensions of mainstream/misinformation and right/left political orientation, users evaluate a total of eight accounts (see Table 1), two from each of four different categories: mainstream-left, mainstream-right, misinformation-left, and misinformation-right. All images
Fig. 2. Study 1 conditions and process.

included facial expressions, and were scored on happiness and anger using multiple supervised machine learning algorithms (for more information, refer to the methodology section). The content shown to users was manipulated in a between-subjects manner. Users were randomly assigned to one of three conditions: happy, angry, and mixed. The happy condition receives tweets that contain images with the highest happy score. The angry condition receives tweets containing images with the highest angry score. The mixed condition alternates between happy and angry images. Furthermore, one account from each category is randomly selected to include no images. For example, a participant assigned to the happy condition always views tweets sorted by the highest happy-rated images but does not see images for four of the accounts (see Figure 2).

For each account, users first evaluate content bias in individual tweets. Participants record their belief and uncertainty about the bias of each tweet. By clicking on "view more tweets," participants view an extra tweet until they come to a decision about the source (a minimum of 5 tweets was enforced for each account). By clicking on "Make a Decision," users see a pop-up view in which we elicit their perceived credibility and political orientation of each source. Users also have the option to use a text box to describe what influenced their decisions.

In Study 1 (pre-registration: https://aspredicted.org/blind.php?x=9rn6i7), we hypothesize that in comparison to tweets with happy images, users are more likely to assess tweets with angry images as biased. Furthermore, we also hypothesize that, compared to the mixed condition, users in the angry condition are more likely to assess sources as less credible. On the other hand, we hypothesize that users in the happy condition will be more likely to perceive tweets as less biased and sources as more credible. However, since it is likely
Table 1. The 8 Twitter accounts that users evaluate in study 1. Each user goes through content sorted based on emotions in facial expressions collected from tweets' images.
Source name | Type | Political Orientation
@veteranstoday | misinformation | left
@amlookout | misinformation | right
@opednews | misinformation | left
@InvestWatchBlog | misinformation | right
@MotherJones | mainstream | left
@nypost | mainstream | right
@cnnPolitics | mainstream | left
@Jeresulem_Post | mainstream | right

for users' judgment to be influenced by the inherent differences in the accounts, we also explore the effects of political orientation (right vs. left) and source type (mainstream vs. misinformation) on their judgements. In summary, we evaluate the potential effects of image emotion, source orientation, and source type on users' choices and the uncertainty around those choices.
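The condition logic above — sorting each account's tweets by image-emotion score, with the mixed condition alternating between the two extremes — can be sketched as follows; the tweet records and score values are hypothetical.

```python
def order_tweets(tweets, condition):
    """Order an account's tweets for display under one of the three
    Study 1 conditions. Each tweet dict carries model-predicted
    'happy' and 'angry' scores for its image."""
    by_happy = sorted(tweets, key=lambda t: t["happy"], reverse=True)
    by_angry = sorted(tweets, key=lambda t: t["angry"], reverse=True)
    if condition == "happy":
        return by_happy
    if condition == "angry":
        return by_angry
    # mixed: alternate between the happiest and angriest remaining images
    mixed, used = [], set()
    for h, a in zip(by_happy, by_angry):
        for t in (h, a):
            if id(t) not in used:
                used.add(id(t))
                mixed.append(t)
    return mixed

tweets = [
    {"id": 1, "happy": 0.9, "angry": 0.10},
    {"id": 2, "happy": 0.2, "angry": 0.80},
    {"id": 3, "happy": 0.6, "angry": 0.30},
    {"id": 4, "happy": 0.1, "angry": 0.95},
]
print([t["id"] for t in order_tweets(tweets, "happy")])  # → [1, 3, 2, 4]
print([t["id"] for t in order_tweets(tweets, "angry")])  # → [4, 2, 3, 1]
print([t["id"] for t in order_tweets(tweets, "mixed")])  # → [1, 4, 3, 2]
```

Under this scheme every participant sees real tweets from the same accounts; only the ordering (and hence the emotional framing of the images encountered first) differs across conditions.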
We considered four total dependent variables (DVs): (1) content bias (bounded value between [0,1]), (2) uncertainty range around content bias (bounded value between [0,1]), (3) source credibility choice (bounded value between [0,1]), and (4) uncertainty around source credibility choice (bounded value between [0,1]). For our independent variables (IVs), we included the image emotion condition (angry, happy, or mixed), image shown (true or false), as well as political orientation (right or left) and source type (mainstream or misinformation). For models built with bias choice/uncertainty as the dependent variable, since these models are built on tweet-level responses, there are only two image emotion conditions: happy or angry.
For each model, we included users' unique id and the source name as random effects. After comparing multiple model specifications using AIC, we also included interaction terms between sources' political orientation and image emotion. The reference conditions for the credibility choice and uncertainty models are image emotion = mixed, source orientation = left, source type = mainstream, and image shown = false. The reference conditions for the bias choice and uncertainty models are image emotion = happy, source orientation = left, source type = mainstream, and image shown = false.
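As background for this specification: a beta regression links a bounded response in (0, 1) to a linear predictor through a logit link on the mean, with a precision parameter phi. A minimal per-observation log-likelihood sketch (our own illustration, not glmmTMB's implementation, and without the random effects):

```python
import math

def inv_logit(eta):
    """Logit link inverse: map a linear predictor to a mean in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-eta))

def beta_loglik(y, eta, phi):
    """Log-likelihood of one response y in (0, 1) under a beta regression:
    mean mu = inv_logit(eta), shape parameters a = mu*phi, b = (1-mu)*phi."""
    mu = inv_logit(eta)
    a, b = mu * phi, (1.0 - mu) * phi
    return (math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b)
            + (a - 1.0) * math.log(y) + (b - 1.0) * math.log(1.0 - y))

# A bias rating of 0.7 is more likely when the linear predictor implies
# a mean near 0.7 than when it implies a mean near 0.3:
print(beta_loglik(0.7, 0.85, 10.0) > beta_loglik(0.7, -0.85, 10.0))  # → True
```

Because the likelihood is defined on (0, 1), the beta family accommodates the bounded, often skewed ratings our elicitation produces, which a Gaussian linear model would not.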
In study 1, we recruited a total of 81 (52 female, 28 male, and one other) university students with an average age of 21 years old. Per our pre-registration, we excluded responses from 9 participants who showed missing responses (due to unexpected technical difficulties), resulting in 72 accepted responses. 30 participants were randomly assigned to the angry condition, 24 participants were assigned to the happy condition, and the remaining 18 were assigned to the mixed condition. Participants took an average of 26 minutes to complete the study. All participants received either extra course credit or required research credit as incentives.
Bias choice is a continuous variable between 0 (not biased) and 1 (biased); bias uncertainty is a continuous variable between 0 (certain) and 1 (uncertain).
Fig. 3. Study 1 fixed effects odds ratios for bias choice (left) and bias uncertainty (right). Error bars indicate 95% confidence intervals. Asterisks indicate statistical significance using p-values: *** 99.9%, ** 99%, * 95%. For image_emotion, the reference category is happy. For image_shown, the reference category is no; left is the reference condition for source_orientation, and mainstream is the reference condition for source_type.
Beliefs and uncertainty of content bias:
We used two mixed-effects beta regressions to study effects on users' beliefs about the bias of individual tweets (see Fig. 3, left). Users found content from right-leaning accounts to be more biased. We also found a similar positive effect for misinformation accounts.
Credibility choice is a continuous variable between 0 (not credible) and 1 (credible); credibility uncertainty is a continuous variable between 0 (certain) and 1 (uncertain).
Fig. 4. Study 1 fixed effects coefficients for credibility choice (left) and credibility uncertainty (right). Error bars indicate 95% confidence intervals. Asterisks indicate statistical significance using p-values: *** 99.9%, ** 99%, * 95%. For image_emotion, the reference category is mixed. For image_shown, the reference category is no; left is the reference condition for source_orientation, and mainstream is the reference condition for source_type.
Beliefs and uncertainty of source credibility:
We used mixed-effects beta regression to investigate the effects of study conditions on users' beliefs about source credibility (see Fig. 4, left). We found that, in reference to left-leaning accounts, users are more likely to rate right-leaning accounts as less credible.

For study 1, each user had the option to answer one open-ended question for each account: "Please describe how the tweets (text and images) influenced your decisions about this account." Even though leaving comments was an optional part of the study, we received comments from all 72 participants, and the majority of participants left comments for all 8 trials. In total, we collected 572 comments about users' decision-making influences. Since thematic analysis with a large number
Table 2. Study 1 topic model of users' comments: 20 extracted topics sorted by the number of unique users with comments assigned to each topic [4].
Topic ID | Count users | Count comments | Top topic terms | Theme
2 | 37 | 53 | tweets, based, unbiased, tweets were biased, political | Facts vs. opinions
11 | 29 | 73 | account, left, right, leaning, credible | Source attitude & political orientation
19 | 27 | 49 | wing, right wing, right, left wing, clearly right | Source attitude & political orientation
1 | 27 | 45 | political, orientation, political orientation, tweets were political, credibility | Source attitude & political orientation
6 | 24 | 35 | source, credible source, credible, overall, opinionated with tweets | Facts vs. opinions
4 | 24 | 34 | facts, stating facts, stating, titles, article titles | Facts vs. opinions
7 | 18 | 30 | bias, credible, report, bias and direct, tell | Source attitude & political orientation
3 | 17 | 24 | opinions, presented, facts, opinions presented, accounts | Facts vs. opinions
10 | 16 | 26 | like a news, news, news source, like, source | Source attitude & political orientation
0 | 15 | 30 | biased, tweets were biased, appear, tell, lean | Source attitude & political orientation
9 | 14 | 25 | images, tell, provided, meme, use of images | Images and Pictures
15 | 12 | 17 | opinionated, appear, articles appear, articles, opinionated with tweets | Facts vs. opinions
8 | 11 | 19 | factual, factual tweets, report, pretty factual, gave were biased | Facts vs. opinions
5 | 10 | 19 | pictures, tell, words, pictures influenced, think the pictures | Images and Pictures
17 | 6 | 8 | clickbait language, language, zoomed in pictures, headshots, held | Language usage and tone
13 | 5 | 10 | stance, political stance, nt, tweets didnt, issues | Source attitude & political orientation
16 | 4 | 9 | associations, influence, helped, influence my decisions, decisions | Language usage and tone
18 | 4 | 6 | variety, variety of tweets, wide variety, wide, misspelling | Language usage and tone
12 | 3 | 8 | non biased, non, biased tweets, non biased tweets, pertaining to politics | Source attitude & political orientation
14 | 3 | 7 | subjective, subjective language, language, tweet, highly subjective | Source attitude & political orientation

of comments is challenging, we used topic modeling to facilitate the qualitative analysis of the comments and arrive at different themes of influences on users' judgments. We categorized the extracted topics into four general themes: 1) source attitude or political orientation, 2) opinionated versus factual reporting, 3) specific language usage or tone, and 4) the effect of images. In this section, we summarize each of these themes and offer a few example comments provided by participants.
Attitude or political orientation:
The majority of comments in study 1 included mentions of general perceptions of source attitude, such as political bias in sources or a lack thereof. Topic 11, with a size of 79 documents from 29 unique users, included such comments. For example, one user wrote: "This account was talking about the right ruining everything so it has to be more left-leaning and it didn't sound complete out there so I don't it was 100% not credible but it definitely isn't credible because it is biased."
Another comment made a similar point about right-leaning sources: "These tweets made it clear that anyone on the left were out of their minds, and anyone on the right was perfectly fine, so it's clear this is a right-leaning account. As for credibility, [...] I don't know, some part of that sort of tone doesn't seem too credible to me."
There were also mentions of more specific source attitudes. Topic 19, with 49 comments from 27 users, included such comments. For example, one comment took a source's repeated focus on Israel-related topics as the basis for their judgement: "lots about zionism and israel, usually doesn't get talked about in right wing media." Another user described harsh attitudes towards President Trump as their rationale: "The Tweets were left-wing orientated with its expression on frustration and opposition to the Trump administration. The rhetoric was a bit harsh and seemed to be attacking Trump administration/right-wing ideology so it was bias and less credible."
Opinionated versus factual reporting:
Another prevailing theme in users' comments was a contrast between perceptions of opinionated versus factual reporting. Topics 2 (53 comments from 37 users), 6 (35 comments from 24 users), and 4 (34 comments from 24 users) included several comments related to this theme. For example, several comments mentioned how sources contained opinionated language: "These tweets seem rather opinionated" and "Not a credible source, seems very opinionated with tweets."
On the other hand, sources that were deemed to report ‘facts’ were considered to be more credible, as described by one of our participants: “I found this to be a credible source because the tweets seemed factual.”
Another user mentioned the factual tone and a center-leaning political orientation as the basis of their decision: “This one seemed the most credible to me. The political orientation seemed to be pretty even and the article titles seemed to be based on facts.”
Specific language usage or tone:
A group of comments highlighted how users sometimes take specific word usage, negative and positive sentiment, and general tone as the basis for their decisions. These comments mostly related to writing style, word choice, and strong language. Topics 16 (9 comments from 4 users), 17 (8 comments from 3 users), and 14 (7 comments from 3 users) included such comments. Some comments mentioned specific words or phrases as cues for their decisions: “Alarmist language “out of control homicides” “what you need to do to be safe””
And, “They are using language such as "Looney Left" and the unflattering close-ups of democratic representatives”.
Another user noted an all-caps writing style as a rationale for their decision: “Alarmist language, clear/strong opinions/writing in all caps”. Finally, a group of comments also mentioned “clickbait” language as the basis for their decisions: “... As for credibility, the way the tweets were written seemed "cheap" to me, like they were clickbait and just meant to draw anyone in based on shock value. I don’t know, some part of that sort of tone doesn’t seem too credible to me.”
Effect of images:
Topics 9 (25 comments from 14 users) and 5 (19 comments from 10 users) included comments that mentioned images as an influence on their decisions. These comments covered a wide range of observations, from unflattering imagery to facial expressions and images perceived as not serious. For example, a user mentioned how the images influenced their decision in the opposite direction of the text: “Some of the tweets seemed to be about data rather than opinions. Some of the pictures seemed to take away merit.”
Another comment explicitly mentioned the facial expressions of individuals helping them decide that the tweets had a left bias: “Many of the tweets seemed to be credible as most were quotes by others. Some of the pictures had facial expressions that made the tweets seem left swinging.”
A number of comments cited comic or non-serious imagery as the basis for their decisions. For example: “Difficult to take the meme-like images seriously.”
And, “The images were cartoonish and difficult to take it seriously. It was obviously making fun of trump.”
A few comments mentioned unprofessional or unflattering images as the basis for their decisions: “This account seemed biased towards Palestinians and Israelis. There were compliments of them and somewhat unflattering pictures of those who either commented against them or who may do (or not do) something against them.”
And, “Some of the pictures were not professional and showed the president and alliances in a negative light.”
These comments show that, at least for a group of users, images sometimes serve as primary evidence for a source’s lack of credibility. However, the majority of comments about pictures and images did not specifically mention emotions in images as the basis for their decisions. This might be due to the fact that the tweets in our dataset covered a diverse set of topics. Users may look for a more systematic negative treatment of specific topics or individuals, rather than a combination of negative imagery across a wide range of topics.
The main motivation behind this study was to assess how user judgements about bias in content and credibility of the source are affected by (the accumulation of) emotions in social media images. Users were randomly assigned to three groups of angry, happy, or mixed emotions. Each group saw content from 8 sources sorted based on the specified emotion. We hypothesized that the accumulation of tweets with angry images would increase the perceived content bias and reduce the perceived source credibility. We also hypothesized that happy images would lead to a reverse effect of a decrease in perceived bias and an increase in perceived credibility of sources in comparison to a condition where happy and angry content is shown interchangeably. We found
Fig. 5. Mean and bootstrapped 95% confidence interval of users’ responses for each account.

partial evidence for our hypotheses: we observed an increase in perceived bias and a decrease in perceived credibility in the angry image emotion condition. However, we did not find evidence of a reverse effect of happy emotion in comparison to a mix of content. We also observed a noticeable reliance on information found in the texts. The effect of angry emotion was somewhat reduced for right-leaning accounts for both content bias and source credibility (see Figure 5). One explanation for this reduced effect could lie in the differences in tone, language, and topical focus between tweets with angry and happy imagery from these sources. Taking nypost as an example, we can see from the content that the tweets that came with happy images mostly focused on celebrities. For example, one of the first tweets users read in the happy condition has a picture of a smiling non-famous man and reads “Teen sues over paramedic allegedly fondling her breasts after seizure”. For nypost, the tweets in the angry condition are more politically charged. For example, one of the first tweets in this condition is about a Republican politician, Roy Moore, with a frowning picture of him looking down. The tweet reads: “Trump [is] concerned about Roy Moore allegations: White House aide”. Users’ comments reinforce this observation. One comment about nypost from the happy condition mentioned “It seems to be a celebrity tabloid account. I honestly think it is a parody of a tabloid account.” while a comment from a user assigned to the angry condition for nypost paints a much different picture of the source: “Headlines feel relatively neutral though maybe slightly conservative but no titles seem overly exaggerated or extreme.”
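The bootstrapped 95% confidence intervals summarized in Fig. 5 can be computed with a simple percentile bootstrap over users’ responses; the sketch below is a minimal illustration using hypothetical [0, 1] ratings, not the study’s data:

```python
import random

def bootstrap_ci(values, n_boot=10_000, alpha=0.05, seed=0):
    """Percentile-bootstrap confidence interval for the mean of `values`."""
    rng = random.Random(seed)
    means = sorted(
        sum(rng.choices(values, k=len(values))) / len(values)
        for _ in range(n_boot)
    )
    lo = means[int((alpha / 2) * n_boot)]
    hi = means[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

# Hypothetical per-user bias ratings (bounded in [0, 1]) for one account.
ratings = [0.2, 0.4, 0.35, 0.6, 0.5, 0.3, 0.45, 0.55]
lo, hi = bootstrap_ci(ratings)
print(lo <= sum(ratings) / len(ratings) <= hi)
```

The percentile bootstrap makes no distributional assumptions, which suits bounded [0, 1] responses like the ones elicited here.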
This observation helps us hypothesize that the topic and the text content of news are likely to be of primary importance in affecting users’ judgements about source credibility and content bias. Furthermore, we made another observation from study 1. Although the content in study 1 was solely sorted based on emotions in images and we did observe the angry condition to significantly impact users’ judgements, we did not observe an effect of the presence of images on the outcomes (see Figure 4). The qualitative analysis of users’ comments might help explain this observation. Three of the four major themes were primarily related to the text content of the tweets. The themes highlight a series of heuristics utilized by users such as tone, attitude towards political parties, and unflattering images. From these comments, we can see that users might heavily rely on cues from the tweet texts to judge bias in the content and credibility of sources. Moreover,
it is possible that the emotional content in images is chosen by sources to match the textual content, and that might result in not observing an effect. Furthermore, the text and topical focus in these conditions are different for every tweet. Our qualitative analysis hints that users might be sensitive to specific cues from specific tweets, and these cues might outweigh the effect of the accumulation of emotions in images. Finally, we observed that users’ uncertainty around their credibility judgements was noticeably reduced for misinformation news sources. In other words, users were on average more confident in their decisions when rating the credibility of misinformation sources. One way to interpret this result is by considering the differences between misinformation and mainstream sources. Previous studies have shown that misinformation sources make use of angrier text and are considered by users to be more opinionated [13, 36]. Given this difference, content from misinformation sources might contain more cues in text and images for users to make more certain decisions. Of course, this needs to be further investigated in future studies.

Fig. 6. Study 2 conditions and process. Participants proceeded through consent, a demographic questionnaire, instructions, the main task, a post-questionnaire, and debriefing. Each user was randomly assigned to one of three conditions (happy, angry, or no image) over 8 different collections of tweets about different personalities. The text and order of tweets was kept constant for all conditions; images were switched to show happy or angry emotions, or no images were shown. Users clicked to view more tweets, repeated for all tweet sets.
Motivated by the results of study 1, we developed study 2’s experimental design to measure the impact of emotions in images on users’ judgements, as well as by evidence that sources systematically portray different politicians with different emotional facial expressions [21]. In study 2, we kept the text content shown to users the same for all conditions and controlled images as either happy, angry, or no images. This required each set of tweets to focus on a specific person; by switching angry and happy images of that person, we could measure the effect of systematic usage of negative (angry) or positive (happy) facial expressions on users’ judgements. Additionally, this experimental design allowed us to investigate the interactions between users’ prior attitudes towards different
Table 3. Politicians selected for study 2
Politician        Notes
Donald Trump      Former President of the US
Hillary Clinton   Former US Secretary of State
Barack Obama      Former President of the US
Theresa May       Former Prime Minister of the UK
Emmanuel Macron   President of France
Angela Merkel     Chancellor of Germany
Kim Jong Un       Supreme Leader of North Korea
Vladimir Putin    President of Russia

politicians with the emotional content of images on their judgements about bias and credibility (see Figure 6).
We curated a dataset containing tweets on eight different politicians, including Donald Trump, Hillary Clinton, Angela Merkel, and Emmanuel Macron (see Table 3). Users were instructed that all tweets mentioning a given politician came from a single, unique source. To control for perspective variance, we started by collecting the text for this study from mainstream sources. In order to limit the impact of text content on users’ decisions, we downselected tweets with the following steps: we first conducted sentiment analysis on the tweet texts using Vader Sentiment [6]. Next, for each set of tweets, we selected the tweets with the highest neutral sentiment scores. Finally, we manually evaluated the tweets and removed tweets with inaccurate scores from the sentiment analysis library. This resulted in tweets about 8 different politicians, from mainstream news sources, that were mostly of neutral tone. The images for this study were manipulated in a between-subjects manner in which users saw either happy, angry, or no images. For example, a user in the happy condition viewed tweets mentioning Hillary Clinton accompanied by happy images of her, a user in the angry condition evaluated the same tweets but with angry images of Hillary Clinton, and a user in the control (no-image) condition viewed the same tweets with no images. Other than these changes in study design, the procedures were equivalent to study 1 with one exception: for each set of tweets mentioning one of the eight politicians, users first answered two questions in a pop-up form about their familiarity with and favorability of that person on a 5-point Likert scale.
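The downselection step described above can be sketched as follows. This is a minimal illustration with hypothetical tweets and neutrality scores; in the study the scores came from VADER, whose `polarity_scores(text)["neu"]` field gives the neutral proportion of a text:

```python
# Sketch of the neutral-tweet downselection step. The tweets and scores
# below are hypothetical; scoring itself is elided (in the study it was
# done with VADER's SentimentIntensityAnalyzer).

def select_most_neutral(tweets_with_scores, k):
    """Return the k tweet texts with the highest neutral-sentiment score."""
    ranked = sorted(tweets_with_scores, key=lambda pair: pair[1], reverse=True)
    return [text for text, _neu in ranked[:k]]

candidates = [
    ("Politician X met with foreign leaders today.", 0.95),
    ("Politician X SLAMMED critics in a furious rant!", 0.40),
    ("Politician X announced a new budget proposal.", 0.91),
    ("Outrage as Politician X dodges questions again.", 0.55),
]

selected = select_most_neutral(candidates, k=2)
print(selected)
```

A manual review pass would then drop any tweets whose scores look implausible, as described above.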
Since study 2 is a continuation of study 1, we expected to see a similar effect of angry emotions on bias and credibility scores. Moreover, we also hypothesized that users’ favorability toward each politician interacts with the effects of image emotions, such that if users are exposed to news focusing on a specific figure, their perception of bias and credibility is affected by both their prior favorability of that person and the emotion with which that person is shown. More specifically, we hypothesized that favorability negatively interacts with angry emotion and positively interacts with happy emotion to predict users’ perceived bias and credibility. Furthermore, we also investigated the impact of users’ familiarity with each politician on their judgements. Pre-registration: https://aspredicted.org/blind.php?x=9js6d5

The dependent variables in study 2 are identical to those in study 1. We considered four total dependent variables (DVs): (1) the tweet bias choice (bounded value between [0,1]), (2) the uncertainty range around tweet bias (bounded value between [0,1]), (3) the source credibility choice (bounded value between [0,1]), and (4) the uncertainty around the source credibility choice (bounded value between [0,1]). For our independent variables (IVs), we included the image emotion condition (angry, happy, or no image), as well as users’ prior favorability towards and familiarity with each politician in the form of 5-point Likert scales.
For each model, we included users’ unique id and the politician’s name as random effects. After comparing multiple model specifications using AIC, we also included interaction terms between users’ favorability and familiarity of each politician and the image emotion. The omitted reference condition is image emotion = no image.
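The AIC-based model comparison mentioned above follows the standard definition AIC = 2k − 2 ln L, where k is the number of parameters and L the maximized likelihood. A small sketch with hypothetical parameter counts and log-likelihoods (not the study’s actual values):

```python
# AIC = 2k - 2*ln(L); lower is better. The parameter counts and
# log-likelihoods here are hypothetical, for illustration only.

def aic(n_params, log_likelihood):
    return 2 * n_params - 2 * log_likelihood

# A specification without interaction terms vs. one that adds the
# favorability-by-emotion and familiarity-by-emotion interactions.
aic_no_interactions = aic(n_params=6, log_likelihood=410.0)
aic_interactions = aic(n_params=10, log_likelihood=420.0)

# The richer model is preferred only if the likelihood gain outweighs
# the penalty for the extra parameters.
print(aic_interactions < aic_no_interactions)
```

In this hypothetical case the interaction model wins because its likelihood gain (10 log-likelihood units) exceeds the penalty for its 4 extra parameters.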
In study 2, we recruited a total of 126 participants (63 female, 62 male, and one who preferred not to say). The average age of participants was 35 years. 81 participants were recruited from Amazon Mechanical Turk and received a 2 dollar incentive; the rest were university students who received either research or extra credit for their participation. Per our pre-registration, we excluded responses from 12 participants with missing responses (due to unexpected technical difficulties), resulting in 114 accepted responses. 34 participants were randomly assigned to the angry condition, 42 to the happy condition, and the remaining 38 to the no-image condition. Participants took an average of 21 minutes to complete the study.
We used two mixed-effects beta regressions to study the effects of the experimental conditions on users’ beliefs about, and uncertainty around, the bias of individual tweets. We found that users viewing tweets with angry images rated the tweets as more biased in comparison to when no images were shown.

Belief and uncertainty of source credibility:
We used mixed-effects beta regression to investigate the effects of study conditions on users’ beliefs about source credibility. We found that users rated sources as less credible when tweets were accompanied by angry facial expressions.
Fig. 7. Study 2 fixed-effects odds ratios for bias choice (left) and bias uncertainty (right). Bias choice is a continuous variable between 0 (not biased) and 1 (biased); bias uncertainty is a continuous variable between 0 (certain) and 1 (uncertain). Error bars indicate 95% confidence intervals. Asterisks indicate statistically significant differences from zero using p-values: *** 99.9%, ** 99%, * 95%. For image_emotion, the reference category is no image.
Fig. 8. Study 2 fixed-effects odds ratios for credibility choice (left) and credibility uncertainty (right). Credibility choice is a continuous variable between 0 (not credible) and 1 (credible); credibility uncertainty is a continuous variable between 0 (certain) and 1 (uncertain). Error bars indicate 95% confidence intervals. Asterisks indicate statistically significant differences from zero using p-values: *** 99.9%, ** 99%, * 95%. For image_emotion, the reference category is no image.
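Figures 7 and 8 report odds ratios, that is, exponentiated coefficients from the logit-link beta regressions: an odds ratio above 1 means the predictor shifts ratings upward relative to the reference (no image). A minimal sketch of the conversion, using hypothetical coefficient values rather than the study’s estimates:

```python
import math

# Hypothetical coefficient and 95% CI bounds on the log-odds scale.
beta, ci_low, ci_high = 0.40, 0.15, 0.65

# Exponentiating maps log-odds coefficients to odds ratios; the CI
# endpoints transform the same way because exp() is monotonic.
odds_ratio = math.exp(beta)
or_ci = (math.exp(ci_low), math.exp(ci_high))

print(round(odds_ratio, 3), round(or_ci[0], 3), round(or_ci[1], 3))
```

Because exp() is monotonic, an odds-ratio CI excludes 1 exactly when the coefficient CI excludes 0, so significance can be read off either scale.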
For our mixed-effects model of users’ uncertainty around their decisions, we did not observe a significant effect of image emotion. However, we did observe that when users’ familiarity with the subjects was higher, users were more likely to be certain in their decisions.

Table 4. Study 2 topic model of users’ comments: 20 extracted topics, sorted by the number of unique users who had comments associated with each topic [4].
Topic ID  Users  Comments  Top Topic Terms                                                               Theme
10        51     92        tweets, factual, opinion, information, little                                 Facts vs. opinions
0         40     67        left, leaning, left leaning, right, news                                      Source attitude & political orientation
8         35     51        like, feel, feel like, sound, sound like                                      Source attitude & political orientation
19        34     71        facial expressions, expressions, facial, angry, angry facial                  Images and pictures
3         33     46        negative, positive, negative light, ones, light                               Negative vs. positive attitude towards personalities
16        32     66        images, given, negative bias, bad, text                                       Images and pictures
4         31     46        bias, given, news, way, statements                                            Source attitude & political orientation
6         31     50        political, orientation, source, political orientation, credibility            Source attitude & political orientation
9         31     47        neutral, pretty neutral, fairly neutral, pretty, fairly                       Language usage and tone
2         27     38        biased, wording, read, rest, tweets                                           Source attitude & political orientation
5         27     46        facts, stating, stating facts, reporting facts, reporting                     Facts vs. opinions
17        26     45        account, flattering, things, think, felt                                      Source attitude & political orientation
14        24     47        pictures, unflattering, headlines, unflattering pictures, stories             Images and pictures
12        22     39        credible, unbiased and credible, tweet, source, tweets seemed credible        Source attitude & political orientation
15        22     28        know, know this person, person, nt know, nt                                   User was not sure
13        19     28        based, fact, fact based, opinion, matter                                      Facts vs. opinions
18        19     27        unbiased, tweets were unbiased, unbiased and credible, neutral tone, explain  Source attitude & political orientation
11        13     21        straight, reporting, straight up reporting, factual reporting, point          Source attitude & political orientation
1         12     18        sure, tweet, explain, little bit                                              User was not sure
7         3      8         tweets were worded, worded, pleasant or unpleasant, depended, unpleasant      Language usage and tone
In study 2, we collected a total of 881 comments from 116 users. Similar to study 1, we analyzed users’ descriptions of the rationale behind their decisions using NMF topic modeling [5] and thematic analysis of the documents most representative of each topic. This helped us categorize the 20 extracted topics into 5 higher-order themes. Three of the themes were similar to the ones we extracted from study 1, while two are unique to this study. Since the goal of this qualitative analysis is to get an overview of how users make their decisions, we will mostly focus on the strategies and comments that are new and unique to this study (Table 4 provides an overview of all the extracted themes and 20 topics).
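The NMF step above factors a nonnegative comment-term matrix V into comment-topic weights W and topic-term weights H; each comment is then associated with its highest-weight topic, and the top-weighted terms per topic (as in Table 4) guide the thematic analysis. A toy sketch using the classic multiplicative update rules; the study used an off-the-shelf NMF implementation, and the matrix below is hypothetical:

```python
import numpy as np

# Toy comment-term count matrix: rows are comments, columns are terms.
# Comments 0-1 mostly use terms 0-1; comments 2-3 mostly use terms 2-3.
V = np.array([
    [2.0, 1.0, 0.0, 0.0],
    [3.0, 1.0, 0.0, 0.0],
    [0.0, 0.0, 2.0, 3.0],
    [0.0, 1.0, 1.0, 2.0],
])

k = 2  # number of topics
rng = np.random.default_rng(0)
W = rng.random((V.shape[0], k)) + 0.1  # comment-topic weights
H = rng.random((k, V.shape[1])) + 0.1  # topic-term weights

# Lee & Seung multiplicative updates for the Frobenius-norm objective;
# the updates preserve nonnegativity of W and H.
for _ in range(300):
    H *= (W.T @ V) / (W.T @ W @ H + 1e-9)
    W *= (V @ H.T) / (W @ H @ H.T + 1e-9)

assignments = W.argmax(axis=1)  # topic assignment per comment
print(assignments)
```

With this block-structured toy matrix, the two comment groups end up assigned to different topics, mirroring how comments cluster under a topic in Table 4.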
Negative vs. positive attitude towards politicians:
This theme is mostly represented in Topic 3 (46 comments from 33 users), with descriptions of each set of tweets’ attitudes towards the individual politicians. Many of the comments were about how a source was mostly covering negative or positive news about a politician, and these comments often mentioned both texts and images. For example, a user mentioned how the source was mostly neutral but had some negative tweets and images about Barack Obama: “There were a couple that seemed to be negative about [Obama] and images were a little negative. Not all were biased but some credibility was lost.”
Another user provided a similar comment about Donald Trump: “There were a few tweets that had no opinion but the ones that did had some negative connotations toward Trump.”
Some of the comments noticed the images being negative while the text was more neutral towards the person. For example, one of the participants provided this comment about Donald Trump: “The images paint [Donald Trump] in a negative light but the text wasn’t actually negative.”
Some users also mentioned that they did not find any negative attitudes towards a politician. For example, one of the users found the tweets about Kim Jong Un to be not negative: “There was nothing too negative and everything seemed legit. Images were not bad towards him.”
Neutral source attitude:
In study 1, we saw many comments about sources being biased or having negative tones. In this study, we observed an emerging theme about the source being more neutral towards a person. Topic 9 (with 47 comments from 31 users) included many comments related to this theme. For example, one of the participants found the tweets about Donald Trump to be mostly neutral: “The general tone of the tweets were neutral, informational with little or no opinion.”
Another user found the tweets about Barack Obama to be mostly neutral with a slight liberal bias: “Most of the tweets seem pretty neutral, although in some seem to be more liberal.”
Once again, users in some cases found a mismatch between text and images in terms of neutrality. This comment is one user’s perspective on the Vladimir Putin tweet set: “Text was neutral. The images were a mix of neutral and deliberating unflattering.”
Effect of Images:
In this study, we observed an increase in image-related comments. We identified three topics that contain descriptions from users related to visual information (Table 4). Topic 19, with 71 comments from 34 users, and Topic 14, with 47 comments from 24 users, include mentions of facial expressions, angry emotions, and unflattering portrayals. For example, a user found the text of the tweets to be mostly unbiased and explained how the portrayed facial expressions of Hillary Clinton were the basis for her judgement: “While the text didn’t involve much biased words, the usage of certain images of Hillary Clinton depicting her facial expressions in an array of negative emotions showed a biased view, in which the account may have wanted viewers to take that negative emotion they may perceive through her image and unconsciously use it to influence their perceptions of Hillary Clinton herself.”
Another user provided similar details about how the images influenced their decisions about tweets related to Emmanuel Macron: “The usage of images shows him in a negative light, with angry and frowning facial expressions. There were also phrases used like “pulling no punches” that suggested some bias.”
Some users also found that a combination of specific language usage with “weirdly close up” and “unflattering” images led them to believe a source was less credible: “The texts were mostly bland, except for “...what do you think”, which is a tabloid-like phrase for me. Photos were weirdly close-up facial views that were generally unflattering, which makes me wonder a bit about credibility...”
Another interesting set of comments was about users perceiving the images as not correlating with the tweets: “Most of the tweets were very normal, but there were a couple that had angry Macron pictures that did not correlate with the headlines presented.”
And, “The majority of tweets were unbiased with their headlines. Some of the pictures might have been a bit questionable.”
A group of comments included specific descriptions of how images did not influence their decisions. Topic 16, with 66 comments from 32 users, includes many such comments. For example, for the collection about Kim Jong Un, a user mentioned how the images were neither negative nor positive: “The images were not the best images nor were they the worst images of him, but there was still a negative bias.”
Another user mentioned how the images and tweets related to Donald Trump were both neutral and honest looking: “The tweets seemed to be honest and state the honest news about what is happening while the images associated with them.”
We asked participants to rate “sources” that each focus on a specific politician. The text was constant for all users, while we manipulated the conditions to include either angry or happy facial expressions of those politicians, or no images. We found that, in comparison to no images being present, users in the angry condition found the content to be more biased and the sources less credible. However, we did not find a significant effect of happy images on users’ judgements. The difference between the happy and angry conditions in our study may be explained by a study on perceptions of negative or positive portrayals of politicians, in which Lubinger and Brantner found that participants mostly agreed on what constituted a negative portrayal, but perceptions of positive portrayals varied widely [15]. This suggests that negative portrayals are more commonly agreed upon and thus might have a stronger effect on users’ judgements. Users’ comments also included many mentions of angry, negative, or unflattering portrayals of these politicians. Comparing the results from studies 1 and 2, one might ask why we did not observe a clear effect of happy images on users’ judgements. First, it is worth reiterating that our quantitative and qualitative findings suggest that users might put a stronger weight on the
Fig. 9. Mean and 95% bootstrapped confidence interval of choices for each condition/politician.

textual content of tweets when judging bias and credibility. Second, the accumulation of angry emotions in tweets might signal a more systematic bias towards specific politicians and thus cause a stronger influence on users’ judgements. We observed that familiarity also impacts users’ judgements of bias and credibility, as well as how certain they are in their decisions. Recent work on judgements about misinformation suggests that prior exposure and familiarity with misinformation increase the perceived accuracy of content [22]. We can assume that users would have to rely on their memory to assess the accuracy of news headlines or articles. On the other hand, our work suggests that bias and credibility of sources might be a different dimension from the accuracy of content. Our comment analysis highlighted that users often rely on more analytical approaches and different kinds of heuristics, such as negativity, word usage, or emotions in facial expressions, to judge bias and credibility of sources. An explanation for the effect of familiarity on users’ judgements about bias and credibility could be that when users are more familiar with a politician, they may be more sensitive to the details of the texts and images they view, and therefore possibly less trusting of the source covering that person. Finally, we observed a small overall effect of favorability on users’ perception of tweet bias, and we did not observe an interaction effect between favorability and emotions in images. Assuming that users are engaged in deliberate reasoning by detecting cues that point towards sources being biased and not credible, these results do not provide evidence that users are engaged in motivated reasoning based on their favorability towards politicians.
This result may add to the line of work suggesting that motivated reasoning is not a primary factor in users’ interaction with misinformation [23]. Moreover, we suspect that this interaction effect might be stronger if users were more invested in the topics covered in the study. In the future, we plan to repeat this study with identities and events that are more polarizing.
Across two consecutive preregistered studies with a total of 207 participants, we find evidence that angry facial expressions in images accompanying social media news posts lead to an increase in users’ perception of bias in content. We also found that users rate sources that show a systematic angry portrayal of different politicians as less credible. These findings provide evidence of the impact of emotional facial expressions on users’ perceptions of source credibility and content bias. These results help paint a more detailed landscape of how users’ trust in news sources is shaped by visual information such as images or videos. Study 1 showed that angry facial emotions increase users’ bias ratings of content and decrease their credibility ratings of sources. We also found that the political orientation of sources reduced the effect of angry emotions. Our qualitative analysis highlighted a wide range of heuristics employed by users that relate to both text and visual information. Users’ perception of how opinionated a source is, choices of unusual or highly negative words, and “unflattering” portrayals of individuals in images are among these heuristics. The combination of our qualitative studies and experimental results showed the impact of negative emotions on users’ judgements, but also highlighted the complex interaction between the topics covered by sources through a combination of text and images that was not explicitly considered in the study 1 design. In the design of study 2, we aimed to get a clearer picture of the impact of angry facial emotions by limiting the tweets users saw to ones mentioning specific politicians. In other words, through study 2, we investigated the effect of systematic negative or positive visual treatment of politicians on users’ perceptions of credibility and bias.
Our results show strong evidence for part of our hypothesis: a systematic negative treatment of politicians leads to a decrease in users’ ratings of source credibility and an increase in perceived content bias. However, we did not find evidence for our hypotheses around the interactions between users’ favorability and emotional facial expressions. Although these results provide clear evidence of the impact of emotional facial expressions in images on users’ judgements, many more aspects remain to be studied. First, within the angry and happy emotion categories, there are several finer levels of facial emotion, ranging from extremely angry/happy to a subtle frown/grin, that might impact users’ judgements. There are also other emotion dimensions, such as sadness, surprise, fear, or disgust, that might potentially impact users’ judgements in this context. Finally, facial expressions rarely contain one unique emotion, and subtle changes might communicate different meanings to individuals. We believe a natural next step for this study is to control for the amount and type of emotion in facial expressions by using generative adversarial networks (GANs) to produce image datasets with finer control over the facial expressions in images. We also acknowledge some general limitations in the design and execution of our studies. First, the tweets used for both studies are approximately four years old and do not reflect current political events. Users might be more invested in and impacted by current political events and make different decisions in light of more relevant news. Furthermore, the tweets for study 2 were selected from multiple mainstream sources instead of one source; a more coherent dataset from one source might yield clearer results. Finally, in order to limit the impact of users’ preconceived notions of sources, we masked all account names in our studies.
Even though we believe that a scenario in which users encounter new and unknown sources is realistic, and especially important in the context of misinformation, in many cases users might have a self-selected set of sources that they trust and refer to on social media. An important future step for our research is to investigate how users update their trust in sources they already know based on new content with different positive and negative emotional images. Such a study is significant in that it can open new ways of reducing trust in misinformation sources by identifying and highlighting content with highly emotional text and images.

Another important factor in understanding how news content impacts users' attitudes towards sources is their uncertainty around their decisions [32]. In both studies, using a new elicitation technique, we asked users to provide uncertainty ranges around their decisions (see the methodology section) and explored the impacts of our experimental conditions on those ranges. For source credibility judgements, we found that some conditions significantly reduced users' uncertainty. In study 1, only source_type = misinformation showed significant reductions in users' uncertainty. One possible interpretation of this effect is the inherent difference between misinformation and mainstream sources, whereby misinformation sources use more extreme and suspicious language and images [13, 35, 36]. In study 2, we observed that familiarity reduced users' uncertainty.
It is possible that with greater familiarity with the persons depicted, users are more likely to detect invalid or suspicious content and therefore make decisions with more confidence. Results from our models of users' uncertainty around their bias choices were less clear and interpretable. One possible reason is that users have much less information to inform their choices about bias for a single tweet, or that many different cues influence their judgements. It is also important to note that several factors might impact users' uncertainty, such as lack of knowledge, lack of clarity, lack of familiarity, or lack of correctness [24]. It is important to empirically clarify the meaning of our graphically elicited uncertainty ranges for different judgements. Future work on uncertainty elicitation needs to address these subtleties in order to make such results more informative and useful.

Although this research was mostly motivated by the prevalence and impact of misinformation on our democracies and societies, we believe that our globally politicized and extremely segregated political ecosystem calls for a more critical and holistic view of the whole media landscape. Although it is important to understand and mitigate users' trust in sources of misinformation, it is of equal importance to understand why individuals might elect not to trust more mainstream and generally trustworthy sources of information. Implicit, negative, and biased visual and verbal propositions about different politicians by mainstream sources might contribute to this lack of trust, leading to a question that remains mostly under-explored: how might the verbal and visual strategies of mainstream media contribute to highly politically polarized societies, the likes of which we witnessed during the 2020 United States presidential election?

REFERENCES

[1] Amelia Arsenault and Manuel Castells. 2006.
Conquering the minds, conquering Iraq: The social production of misinformation in the United States – a case study. Information, Communication & Society 9, 3 (2006), 284–307.
[2] Jonah Berger and Katherine L Milkman. 2012. What makes online content viral? Journal of Marketing Research 49, 2 (2012), 192–205.
[3] Nicholas David Bowman and Elizabeth Cohen. 2020. Mental shortcuts, emotion, and social rewards: The challenges of detecting and resisting fake news. Fake News: Understanding Media and Misinformation in the Digital Age (2020), 223.
[4] Jason Chuang, Christopher D Manning, and Jeffrey Heer. 2012. Termite: Visualization techniques for assessing textual topic models. In Proceedings of the International Working Conference on Advanced Visual Interfaces. 74–77.
[5] Andrzej Cichocki and Anh-Huy Phan. 2009. Fast local algorithms for large scale nonnegative matrix and tensor factorizations. IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences 92, 3 (2009), 708–721.
[6] CHE Gilbert and Erric Hutto. 2014. VADER: A parsimonious rule-based model for sentiment analysis of social media text. In Eighth International Conference on Weblogs and Social Media (ICWSM-14), Vol. 81. 82.
[7] Carl Iver Hovland, Irving Lester Janis, and Harold H Kelley. 1953. Communication and Persuasion. (1953).
[8] Lori Jones and Richard Nevell. 2016. Plagued by doubt and viral misinformation: the need for evidence-based use of historical disease images. The Lancet Infectious Diseases 16, 10 (2016), e235–e240.
[9] Alireza Karduni. 2019. Human-misinformation interaction: Understanding the interdisciplinary approach needed to computationally combat false information. arXiv preprint arXiv:1903.07136 (2019).
[10] Alireza Karduni, Isaac Cho, Ryan Wesslen, Sashank Santhanam, Svitlana Volkova, Dustin Arendt, Samira Shaikh, and Wenwen Dou. 2018. Vulnerable to misinformation? Verifi! (2018). Submitted to VAST 2018.
[11] Alireza Karduni, Isaac Cho, Ryan Wesslen, Sashank Santhanam, Svitlana Volkova, Dustin L Arendt, Samira Shaikh, and Wenwen Dou. 2019. Vulnerable to misinformation? Verifi!. In Proceedings of the 24th International Conference on Intelligent User Interfaces. 312–323.
[12] A. Karduni, D. Markant, R. Wesslen, and W. Dou. 2020. A Bayesian cognition approach for belief updating of correlation judgement through uncertainty visualizations. IEEE Transactions on Visualization and Computer Graphics (2020), 1–1. https://doi.org/10.1109/TVCG.2020.3029412
[13] Alireza Karduni, Ryan Wesslen, Sashank Santhanam, Isaac Cho, Svitlana Volkova, Dustin Arendt, Samira Shaikh, and Wenwen Dou. 2018. Can You Verifi This? Studying Uncertainty and Decision-Making About Misinformation using Visual Analytics. In International Conference on Web and Social Media (ICWSM).
[14] Silvia Knobloch, Matthias Hastall, Dolf Zillmann, and Coy Callison. 2003. Imagery effects on the selective reading of Internet newsmagazines. Communication Research 30, 1 (2003), 3–29.
[15] Katharina Lobinger and Cornelia Brantner. 2015. Likable, funny or ridiculous? A Q-sort study on audience perceptions of visual portrayals of politicians. Visual Communication 14, 1 (2015), 15–40.
[16] Cameron Martel, Gordon Pennycook, and David G Rand. 2020. Reliance on emotion promotes belief in fake news. Cognitive Research: Principles and Implications 5, 1 (2020), 1–20.
[17] Lena Masch and Oscar W Gabriel. 2020. How emotional displays of political leaders shape citizen attitudes: The case of German Chancellor Angela Merkel. German Politics 29, 2 (2020), 158–179.
[18] David P McCabe and Alan D Castel. 2008. Seeing is believing: The effect of brain images on judgments of scientific reasoning. Cognition.
[19] Journal of Open Source Software 2, 11 (2017), 205.
[20] Paul Messaris and Linus Abraham. 2001. The role of images in framing news stories. Framing Public Life: Perspectives on Media and Our Understanding of the Social World (2001), 215–226.
[21] Yilang Peng. 2018. Same candidates, different faces: Uncovering media bias in visual portrayals of presidential candidates with computer vision. Journal of Communication 68, 5 (2018), 920–941.
[22] Gordon Pennycook, Tyrone Cannon, and David G Rand. 2018. Prior exposure increases perceived accuracy of fake news. (2018).
[23] Gordon Pennycook and David G Rand. 2019. Lazy, not biased: Susceptibility to partisan fake news is better explained by lack of reasoning than by motivated reasoning. Cognition 188 (2019), 39–50.
[24] John V Petrocelli, Zakary L Tormala, and Derek D Rucker. 2007. Unpacking attitude certainty: Attitude clarity and attitude correctness. Journal of Personality and Social Psychology 92, 1 (2007), 30.
[25] Stephen Porter, Sabrina Bellhouse, Ainslie McDougall, Leanne Ten Brinke, and Kevin Wilson. 2010. A prospective investigation of the vulnerability of memory for positive and negative emotional scenes to the misinformation effect. Canadian Journal of Behavioural Science/Revue canadienne des sciences du comportement 42, 1 (2010), 55.
[26] Julio Reis, Fabrício Benevenuto, Pedro OS de Melo, Raquel Prates, Haewoon Kwak, and Jisun An. 2015. Breaking the news: First impressions matter on online news. arXiv preprint arXiv:1503.07921 (2015).
[27] Barry Richards. 2007. Emotional Governance: Politics, Media and Terror. Springer.
[28] Florian Schroff, Dmitry Kalenichenko, and James Philbin. 2015. FaceNet: A unified embedding for face recognition and clustering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 815–823.
[29] NJ Schweitzer, Denise A Baker, and Evan F Risko. 2013. Fooled by the brain: Re-examining the influence of neuroimages. Cognition.
[31] Zhenhao Shi, An-Li Wang, Lydia F Emery, Kaitlin M Sheerin, and Daniel Romer. 2017. The importance of relevant emotional arousal in the efficacy of pictorial health warnings for cigarettes. Nicotine & Tobacco Research 19, 6 (2017), 750–755.
[32] Zakary L Tormala and Richard E Petty. 2004. Source credibility and attitude certainty: A metacognitive analysis of resistance to persuasion. Journal of Consumer Psychology 14, 4 (2004), 427–442.
[33] Cristian Vaccari and Andrew Chadwick. 2020. Deepfakes and disinformation: exploring the impact of synthetic political video on deception, uncertainty, and trust in news. Social Media + Society 6, 1 (2020), 2056305120903408.
[34] Madalina Vlasceanu, Jacob Goebel, and Alin Coman. 2020. The Emotion-Induced Belief Amplification Effect. In Proceedings of the Annual Meeting of the Cognitive Science Society.
[35] Svitlana Volkova, Ellyn Ayton, Dustin L Arendt, Zhuanyi Huang, and Brian Hutchinson. 2019. Explaining multimodal deceptive news prediction models. In Proceedings of the International AAAI Conference on Web and Social Media, Vol. 13. 659–662.
[36] Svitlana Volkova, Kyle Shaffer, Jin Yea Jang, and Nathan Hodas. 2017. Separating facts from fiction: Linguistic models to classify suspicious and trusted news posts on Twitter. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), Vol. 2. 647–653.
[37] Laura E Wallace, Duane T Wegener, and Richard E Petty. 2020. When sources honestly provide their biased opinion: Bias as a distinct source perception with independent effects on credibility and persuasion. Personality and Social Psychology Bulletin 46, 3 (2020), 439–453.
[38] Claire Wardle and Hossein Derakhshan. 2017. Information Disorder: Toward an interdisciplinary framework for research and policymaking. Council of Europe report, DGI (2017).