Many Americans share fake news on social media because they’re simply not paying attention to whether the content is accurate — not necessarily because they can’t tell real from made-up news, a new study in Nature suggests.
Lack of attention was the driving factor behind 51.2% of misinformation sharing among social media users who participated in an experiment conducted by a group of researchers from MIT, the University of Regina in Canada, the University of Exeter Business School in the United Kingdom and the Center for Research and Teaching in Economics in Mexico. The results of a second, related experiment indicate a simple intervention — prompting social media users to think about news accuracy before posting and interacting with content — might help limit the spread of online misinformation.
“It seems that the social media context may distract people from accuracy,” study coauthor Gordon Pennycook, an assistant professor of behavioral science at the University of Regina, told The Journalist’s Resource in an email interview. “People are often capable of distinguishing between true and false news content, but fail to even consider whether content is accurate before they share it on social media.”
Pennycook and his colleagues conducted seven behavioral science and survey experiments as part of their study, “Shifting Attention to Accuracy Can Reduce Misinformation Online,” published Wednesday. Some experiments focused on Facebook and others focused on Twitter.
The researchers recruited participants for most of the experiments through Amazon’s Mechanical Turk, an online crowdsourcing marketplace that many academics use. For one experiment, they selected Twitter users who previously had shared links to two well-known, right-leaning websites that professional fact-checkers consistently rate as untrustworthy — Breitbart.com and Infowars.com. Sample sizes ranged from 401 U.S. adults in the smallest experiment to 5,379 in the largest.
For several experiments, researchers asked participants to review the basic elements of news stories — headlines, the first sentences and accompanying images. Half the stories represented actual news coverage while the other half contained fabricated information. Half the content was favorable to Republicans and half was favorable to Democrats. Participants were randomly assigned to either judge the accuracy of headlines or determine whether they would share them online.
For the final experiment, researchers sent private messages to 5,379 Twitter users who previously had shared content from Breitbart and Infowars. The messages asked those individuals to rate the veracity of one news headline about a topic unrelated to politics. Researchers then monitored the content those participants shared over the next 24 hours.
The experiments reveal a host of insights into why people share misinformation on social media:
- One-third — 33.1% — of participants’ decisions to share false headlines occurred because they didn’t realize the headlines were inaccurate.
- More than half — 51.2% — of participants’ decisions to share false headlines stemmed from inattention.
- Participants reported valuing accuracy over partisanship — a finding that challenges the idea that people share misinformation to benefit their political party or harm the opposing party. Nearly 60% of participants who completed a survey said it’s “extremely important” that the content they share on social media is accurate. About 25% said it’s “very important.”
- Partisanship was a driving factor behind 15.8% of decisions to share false headlines on social media.
- Social media platform design could contribute to misinformation sharing. “Our results suggest that the current design of social media platforms — in which users scroll quickly through a mix of serious news and emotionally engaging content, and receive instantaneous quantified social feedback on their sharing — may discourage people from reflecting on accuracy,” the authors write in their paper.
- Twitter users who previously shared content from Breitbart and Infowars were less likely to share misinformation after receiving private messages asking them to rate the accuracy of a single news headline. During the 24 hours after receiving the messages, these Twitter users were 2.8 times more likely to share a link to a mainstream news outlet than a link to a fake news or hyper-partisan website.
Pennycook and his colleagues note that the Twitter intervention — sending private messages — seemed particularly effective among people with a larger number of Twitter followers. Pennycook told JR that’s likely because Twitter accounts with more followers are more influential within their networks.
“The downstream effect of improving the quality of news sharing increases with the influence of the user who is making better choices,” he explained. “It may be that the effect is as effective (if not more so) for users with more followers because the importance of ‘I better make sure this is true’ is literally greater for those with more followers.”
Pennycook said social media platforms could encourage the sharing of higher-quality content — and re-orient people toward truth — by nudging users to pay more attention to accuracy.
Platforms, the authors point out, “could periodically ask users to rate the accuracy of randomly selected headlines, thus reminding them about accuracy in a subtle way that should avoid reactance (and simultaneously generating useful crowd ratings that can help identify misinformation).”
The researchers received funding for their study from the Ethics and Governance of Artificial Intelligence Initiative of the Miami Foundation, the William and Flora Hewlett Foundation, the Omidyar Network, the John Templeton Foundation, the Canadian Institutes of Health Research, and the Social Sciences and Humanities Research Council of Canada.