EMBO reports | Volume 22 | 2021 | DOI: 10.15252/embr.202052282

Education alone is insufficient to combat online medical misinformation

 
 

We read Emilia Niemiec's article, "COVID-19 and misinformation" (Niemiec, 2020), with great interest. While we agree that censorship is an inadequate solution to the "infodemic" of false medical news on social media, we would like to provide additional context regarding the usefulness of both education and censorship as tools to fight misinformation. We also discuss empirically supported solutions to the problem of online misinformation, including accuracy nudges and crowdsourced ratings.

Our primary disagreement with Niemiec concerns the particular forms of education she recommends to combat online health misinformation. Several of these remedies lack strong empirical support, such as teaching about social media companies' business models. Others may produce unintended consequences. For example, teaching about researcher bias and flawed peer-review systems could increase vulnerability to health misinformation by undermining trust in science and scientists (Roozenbeek et al, 2020).

Not all educational approaches to reducing misinformation's impact are unsupported or potentially misguided. Teaching strategies for spotting misinformation—for instance, checking authors' sources—improves discernment between real and fake news (Guess et al, 2020). Learning, in a game-like environment, the techniques commonly used to peddle misinformation reduces the perceived reliability of fake news items and improves confidence in correct reliability judgments (Basol et al, 2020).

Despite their promise, educational interventions have significant limitations. Chiefly, they require individuals who are motivated to seek them out and engage with them voluntarily. This complicates outreach to populations with lower digital media literacy, such as older individuals, who may be the most likely to share fake news. Furthermore, even effective educational interventions published in prominent journals do not eliminate vulnerability to misinformation. For example, after learning strategies to spot misinformation, more than 20% of people still rated fake news "somewhat accurate" or "very accurate" (Guess et al, 2020).

A final limitation of educational interventions stems from their focus on the perceived accuracy of misinformation. Perceived accuracy has little impact on information sharing, likely because social media encourages individuals to focus on other factors, such as whether sharing will attract and please followers and friends (Pennycook et al, 2019). Accordingly, educational interventions that improve detection of online health misinformation may not reduce misinformation sharing. Interventions that do not reduce sharing are incomplete because sharing begets misinformation exposure, which in turn begets increased perceptions of truth.

Clearly, education alone is an inadequate solution to the problem of medical misinformation on social media. Reducing the harms associated with misinformation requires multipronged, empirically validated approaches, which may include forms of censorship, nudges, and crowdsourcing. Censorship can prevent individuals from being exposed to false and potentially dangerous ideas. Preventing exposure is integral because merely viewing misinformation increases perceptions of its truth, as demonstrated in experiments on the "illusory truth effect", which extends to fake news and holds even when information is implausible or contradicts pre-existing knowledge (Fazio et al, 2019).
A significant amount of misinformation promoting COVID-19 "cures" and "preventative agents" is clearly false and potentially dangerous. In Iran, misinformation about using ethanol to cure or prevent infection, in combination with cultural factors (alcohol being illegal), has precipitated fatal methanol poisonings (Hassanian-Moghaddam et al, 2020). False claims that COVID-19 vaccines contain a microchip or will alter DNA may encourage vaccine hesitancy (Roozenbeek et al, 2020) and thereby interfere with the establishment of herd immunity.

Disabusing individuals of beliefs inspired by this misinformation will be difficult: individuals often continue to rely upon misinformation even after viewing explicit corrections—a phenomenon known as the "continued influence effect" (Basol et al, 2020). Furthermore, human cognition appears organized to resist belief modification, and humans display cognitive biases, such as confirmation bias, that help to maintain existing beliefs (Bronstein et al, 2019). Because censorship circumvents exposure to false information, and thus intervenes before beliefs become established and subject to these biases, it has immense value in the fight against online health misinformation.

To be clear, we advocate for the deletion of false and dangerous information; other forms of censorship, like labeling information as disputed, can have unintended consequences, such as causing unlabeled false information to seem more accurate—the "implied truth effect" (Pennycook & Rand, 2019). In cases where censorship is less well-suited, such as when information's epistemic …
