
Seen a fake news story recently? You’re more likely to believe it next time

Talk of fake news dominated the 2016 presidential election cycle. New research examines how people fall for such disinformation.


“Pope Francis Shocks World, Endorses Donald Trump for President”; “ISIS Leader Calls for American Muslim Voters to Support Hillary Clinton.”

These examples of fake news come from the 2016 presidential campaign. Such highly partisan fabricated stories, designed to look like real reporting, probably played a bigger role in that bitter contest than in any previous American election cycle. The fabrications spread on social media and into traditional news sources in ways that tarnished both major candidates’ characters.

Sometimes the authors intend to damage a candidate; sometimes they are driven only by dollar signs.

Questions about how and why voters across the political spectrum fell for such disinformation have nagged at social scientists since early in the 2016 race. The authors of a new study address these questions with cognitive experiments on familiarity and belief.

An academic study worth reading: “Prior Exposure Increases Perceived Accuracy of Fake News,” a Yale University working paper, 2017.

Study summary: Gordon Pennycook, Tyrone Cannon and David Rand — a psychologist, an economist and a professor of management, all at Yale University — address questions many have asked since the 2016 election: “How is it that so many people came to believe stories that are patently and demonstrably untrue?” “What mechanisms underlie these false beliefs that might be called mass delusions?”

Pennycook and his colleagues designed three studies to test how people absorb fake news. They presented test subjects with an equal number of fake news headlines (fabricated stories that actually circulated on social media) and real news headlines. Then they asked subjects whether they would share each headline on social media and whether they believed it was accurate. The headlines were presented to look like Facebook posts, sometimes including the names of source publications, and they were balanced politically: an equal number appeared to be pro-Democrat and pro-Republican.

To test whether familiarity affected how likely subjects were to trust a story, the researchers conducted some of the experiments in stages, introducing the stories and then presenting them a second time, sometimes a week later.

The authors also tested a warning label similar to the one Facebook began attaching to suspicious posts after the election (“Disputed by 3rd Party Fact-Checkers”), allowing them to probe the cognitive mechanisms behind subjects’ accuracy judgments.

To establish a way of testing how political ideology informs decisions about fake news, the authors asked participants to identify their political party and whom they voted for in the 2016 presidential election. Subjects who did not vote for either Donald Trump or Hillary Clinton were asked to choose one “if you absolutely had to.”

Key takeaways:

  • Subjects rate familiar fake news headlines (posts they have seen even just once before) as more accurate than unfamiliar real news headlines. The perceived accuracy of a headline increases linearly with the number of times a participant has been exposed to it, suggesting “a compounding effect of familiarity across time.”
  • The authors test for causality and find that presenting fake headlines once early in the study (making them familiar) leads almost twice as many subjects to rate those headlines as accurate later on. “The fact that a single exposure to the fake stories doubled the number of credulous participants suggests that [this familiarity effect] may have a substantial impact in daily life, where people can see fake news headlines cycling many times through their social media newsfeeds.”
  • The authors determine that this effect is driven largely by low-level mental processes rather than complex reasoning: subjects presented only with fake news headlines were more likely to believe the fake headlines they had seen previously (even when those carried a warning label) than to believe unfamiliar fake stories.
  • Thus, the warning labels do not diminish the familiarity effect. Instead, warning labels seem to make participants suspicious of all headlines, not only the headlines they are warned about, and less likely to share fake news on social media. 
  • People are less likely to believe a fake news story when it is inconsistent with their political beliefs. But they are still more likely to believe it if they have seen it before. The authors cite this headline as an example: “Sarah Palin Calls to Boycott Mall of America Because Santa Was Always White in the Bible.” On first exposure, Clinton supporters were almost twice as likely to believe the story as Trump supporters (24 percent versus 12.6 percent). On second exposure, a larger share of both groups rated it accurate: 31.3 percent of Clinton supporters and 16.9 percent of Trump supporters.

Analysis:

  • The authors conclude that warning labels are not the answer: “larger solutions are needed that prevent people from ever seeing fake news in the first place, rather than qualifiers aimed at making people discount the fake news that they see.”
  • They interpret their findings to suggest that “politicians who continuously repeat false statements will be successful, at least to some extent, in convincing people those statements are in fact true.”
  • And finally, they note that the polarized echo chambers many voters find themselves in today help create “incubation chambers for blatantly false (but highly salient and politicized) fake news stories.”

Other research:

Our 2017 research roundup on fake news offers insights into its impacts and how people use the internet to spread rumors and misinformation.

We also profiled this 2016 study in Computers in Human Behavior, which suggests that regardless of whether they read news posts, people feel informed when they glance at a busy Facebook feed.

“Social Clicks: What and Who Gets Read on Twitter,” a 2016 paper presented at the Association for Computing Machinery’s SIGMETRICS conference, analyzes the effects of 2.8 million shares on social media.

This 2017 working paper from Stanford University finds that fact-checkers often do not agree with one another.

This 2017 paper from the Proceedings of the National Academy of Sciences identifies the conditions in which people are most and least likely to fact-check statements.

Other resources:

Just after the 2016 election, a Pew Research Center study found that 64 percent of Americans said fake news stories caused confusion about basic facts; 16 percent had shared fabricated stories by mistake. A separate 2016 Pew study found that 62 percent of American adults get news on social media, up from roughly 49 percent in 2012.

A 2017 conference organized by Northeastern University and the Shorenstein Center discussed combating fake news. A list of recommendations is available here.
