Misinformation on social media was bad before the coronavirus pandemic, and it has only gotten more harmful in the age of social distancing, now that lives are on the line and millions of Americans are getting their daily dose of human interaction and information solely from the virtual world.
Social media companies and traditional news outlets are working on the fly to reduce misinformation about COVID-19, the disease caused by the new coronavirus. Twitter, for example, announced on March 16 that it would require users to remove posts that deny recommendations from global or local public health authorities. Also on March 16, Facebook, Google, LinkedIn, Reddit, Twitter and YouTube put out a broadly worded joint statement on how those platforms are working together and “elevating authoritative content” to combat coronavirus misinformation.
On the fact-checking front, USA Today has joined Facebook’s Third-Party Fact Checking Program, through which certified fact checkers flag misinformation on the platform. False posts come with a warning and show up lower in news feeds. Reuters, the Associated Press, PolitiFact and other outlets are also part of the program. Snopes, a myth-busting website, withdrew from the program last February after “evaluating the ramifications and costs of providing third-party fact-checking services,” the site explained in a blog post.
To get a sense of the vast ecosystem of misinformation that social media users are up against — and what journalists can do to avoid being part of the misinformation problem — I called my friend David Rand, who happens to study misinformation as an associate professor of management science and brain and cognitive sciences at the Massachusetts Institute of Technology. His work has been published in the Harvard Kennedy School Misinformation Review, among many other places.
In a new working paper, Rand and his co-authors — including frequent collaborator Gordon Pennycook at the University of Regina in Saskatchewan — apply their framework for studying political misinformation to misinformation about the coronavirus pandemic. The authors conducted two nationally representative surveys with about 850 participants apiece. Rand posted on Twitter an easily digestible roundup of the new findings.
A working paper, as the phrase implies, is a work in progress, compared with peer-reviewed research, which is considered complete and is published in a journal only after a panel of peer reviewers has evaluated and approved it. Because news on the coronavirus pandemic is changing by the day, if not the hour, much of the current research on COVID-19 is in working paper form.
Here are some takeaways from my conversation with Rand:
- Headlines matter — because on social media, the headline is often the only thing people see.
- Professional fact checkers simply can’t keep up with the sheer volume of misinformation on social media, and social media users themselves may be better suited to the task of flagging false posts.
- Journalists can avoid amplifying false claims by focusing on the truth rather than describing misleading claims in detail.
The Q&A that follows has been lightly edited for clarity.
Clark Merrefield: What can journalists do to avoid being part of the misinformation problem about coronavirus, and more generally?
David Rand: It’s hard. It’s a hard question. One huge question we’ve gotten really interested in, and are starting a project on (we don’t have results yet), is amplification. Which is to say, in the political sphere, when fringe outlets produce crazy-ass fake news stories and then the New York Times writes about it and says, “Look at this crazy-ass fake news story,” all of a sudden way more people read it. And not just more people, different people. If before it was isolated in the right-wing media ecosystem, when the Times writes about it a huge swath of centrist and left-leaning people read it. We have research showing that you forget the correction faster than you forget the basic false statement. So it is a real challenge.
How do you cover misinformation? One of the prescriptions is don’t say, “This is a crazy statement and it’s wrong,” but instead say, “Contrary to what some people are saying, this is actually the correct statement.” That doesn’t work in every instance. But you can say, “Vitamin C does not help protect you from coronavirus.” Still, there’s a problem of thresholds: how widespread does a false claim need to get before it’s worth countering, given that countering it also gives it a new platform?
Another thing journalists should think about in the age of social media is that the vast majority of people only read the headline. And there is a baseline tendency to make headlines that are more sensational and less accurate than the content of the article, to get people to click. That is a huge problem in the era of social media. It’s more important than ever because news outlets are desperate for clicks, and it’s also more problematic than ever because people aren’t reading the articles. One thing we’ve been trying to suggest to Facebook is that if you do enforcement based on the headline and downrank articles that have crazy-seeming headlines, that will put pressure on outlets to curtail the crazy-headline, reasonable-article trend.
CM: The other day I was scrolling through Facebook and someone had posted a New York Times article with an angry-face emoji, and the headline that showed in the feed was “Pelosi Begins Drive to Block Trump’s Emergency Declaration.” Which, in the context of the world being turned upside down as we speak, one might reasonably think Nancy Pelosi was trying to block Donald Trump’s national emergency declaration related to COVID-19. Anyway, that’s what I thought, and I was surprised. But I clicked the link and the real headline was “Pelosi Begins Drive to Block Trump’s Border Wall Declaration,” and it’s from February. So it’s a true and accurate story, but it’s about the House speaker trying to throw a wrench into the president’s border wall idea, and it’s completely out of context.
DR: I know that platforms have been thinking about this, more for images than for news articles, but the same thing could be done for news articles: if the image is more than a year old, or whatever the criteria, they could flag it and say, “This image is a year old.” That happens a lot in certain aspects of misinformation: people post old things and claim them as current events. Something that could easily be done is to make the original posting date more obvious.
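As a rough illustration, here is a minimal sketch of what such a staleness flag could look like, assuming a platform can resolve a linked article’s original publication date. The function name and the one-year threshold are assumptions for illustration, not any platform’s actual implementation:

```python
from datetime import datetime, timezone

STALE_AFTER_DAYS = 365  # assumed threshold: "more than a year old, or whatever the criteria"

def staleness_notice(published_at: datetime, now: datetime | None = None) -> str | None:
    """Return a notice like 'originally published about 1 year(s) ago', or None if recent."""
    now = now or datetime.now(timezone.utc)
    age_days = (now - published_at).days
    if age_days < STALE_AFTER_DAYS:
        return None  # recent enough; no flag needed
    years, months = divmod(age_days // 30, 12)
    age = f"{years} year(s), {months} month(s)" if years else f"{months} month(s)"
    return f"This article was originally published about {age} ago."

# Example: a February 2019 story shared in March 2020 gets flagged.
print(staleness_notice(datetime(2019, 2, 14, tzinfo=timezone.utc),
                       now=datetime(2020, 3, 20, tzinfo=timezone.utc)))
```

Run against the border wall example above, a check like this would surface a “this article is over a year old” notice right in the preview, without any judgment about whether the story itself is true.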
CM: Your new paper looks at misinformation spreading on social media about COVID-19. What questions did you and your collaborators ask and what did you find?
DR: We’ve been interested for a while in misinformation on social media and what you can do to reduce its spread, and we mostly have been focusing on political misinformation: fake news, hyper-partisan news, stuff like that. And then the COVID-19 pandemic was obviously very much on our minds, and misinformation seemed to be one part of the COVID-19 problem. We wondered whether the same things we found to be true for political misinformation would also be true for COVID-19 misinformation.
We did a couple of experiments. The basic structure is that they’re survey experiments where we recruit people who are reasonably representative, so they match the U.S. distribution of age, gender, ethnicity and geographic region. And we show them a series of social media posts, like Facebook posts. They’re all real Facebook posts, all things that were actually posted on social media, and they’re all about COVID-19. Half of them are true and half of them are false.
In the first experiment we asked half of the people, “Do you think this is accurate or not?” and asked the other half, “Would you consider sharing this on social media or not?” What we found in the political context, and what we wanted to test here, was whether there is a disconnect between accuracy and sharing. The pattern we find is that people are pretty good when you ask them, “Is it accurate or not?” They rate the true stories as way more accurate than the false stories. If instead you ask, “Would you share this on social media or not?” they’re way less discerning, and there’s essentially no difference in their likelihood to share: they’re pretty much as likely to share the false posts as the true ones. Fifty percent more people said they would share the headlines than said they were true. There is a good chunk of cases where, if I asked, “Is this true?” you would say no, but if I asked, “Would you share it?” you would say yes.
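To make that comparison concrete, here is a minimal sketch of the kind of “discernment” calculation underlying this result: the gap between how people treat true versus false headlines, computed separately for accuracy judgments and for sharing intentions. The data below are invented for illustration; they are not the study’s numbers.

```python
# Toy data: (headline_is_true, rated_accurate, would_share), one row per response.
responses = [
    (True,  True,  True),
    (True,  True,  False),
    (True,  True,  True),
    (False, False, True),
    (False, False, True),
    (False, True,  False),
]

def yes_rate(rows, col):
    """Fraction of rows answering 'yes' in the given column (1=accuracy, 2=sharing)."""
    return sum(r[col] for r in rows) / len(rows)

true_rows = [r for r in responses if r[0]]
false_rows = [r for r in responses if not r[0]]

# Discernment = rate for true headlines minus rate for false headlines.
accuracy_discernment = yes_rate(true_rows, 1) - yes_rate(false_rows, 1)
sharing_discernment = yes_rate(true_rows, 2) - yes_rate(false_rows, 2)

print(f"accuracy discernment: {accuracy_discernment:+.2f}")  # +0.67: true rated far more accurate
print(f"sharing discernment:  {sharing_discernment:+.2f}")   # +0.00: shared at the same rate
```

A large positive accuracy discernment alongside a near-zero sharing discernment is the disconnect Rand describes; the second experiment, discussed below, compares the sharing measure across a control and a treatment condition.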
CM: What’s going on? What’s the mechanism at work where people are sharing information they perhaps know to be false?
DR: The first explanation you might think of is that it’s a post-truth world and people are perfectly happy to share things that are false. We don’t think that’s what’s going on. People care about accuracy in the abstract, but the social media context focuses their attention on other things. Like, how will my friends and followers react? What does this post or share say about me as a person? People forget to think about accuracy, even though they would take accuracy into account if they thought about it.
To support this interpretation, and to suggest an intervention, we do a second experiment where everyone gets shown a set of headlines and gets asked, “Would you share it or not?” That’s the control condition. In the treatment condition, at the beginning of the study, before participants start saying whether they would share the articles, we say, “We want you to test an item for another survey we’re developing. Just rate the accuracy of this one random headline,” and the headline is not about COVID-19, just some random headline. Then we say, “OK, great, thanks, go on to the main task,” where we show them the COVID-19 headlines.
Asking them to rate the accuracy of the random headline makes the concept of accuracy more top-of-mind for people. And that’s what we find. In the control, people are equally likely to share true or false headlines. In the treatment, they’re significantly more likely to share true headlines than false headlines. This is exactly the same thing we found for political headlines in our other study. And so that suggests this is a general principle rather than something specific to political misinformation.
CM: What role do social media platforms play in preventing the spread of misinformation about COVID-19, and generally?
DR: Social media platforms are, if not literally the only ones who can do anything about misinformation, certainly the ones in a far better position to do something about it. They have all the information and complete control over the design elements. People outside these companies don’t know how the platforms actually work, so it’s hard to come up with regulations that are effective. So we talk to people at platforms and try to talk them into doing things we think make sense. Or, in the best case, help them design interventions.
A simple intervention social media companies could implement is to nudge people to think about accuracy. Like, while you’re scrolling through a newsfeed, a card pops up saying, “Help us inform our algorithms and rate this headline. Do you think it’s accurate or not?” That would make users think about accuracy, and the ratings themselves would be useful: a small crowd of laypeople is pretty good at identifying false or misleading content.
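Here is a minimal sketch of what the aggregation behind such a rating card might look like, assuming each headline accumulates simple accurate/not-accurate votes from laypeople. All names, the 15-rater minimum and the majority threshold are assumptions for illustration, not any platform’s actual system:

```python
from collections import defaultdict

# Hypothetical store of lay ratings per headline: 1 = "accurate", 0 = "not accurate".
ratings: dict[str, list[int]] = defaultdict(list)

def record_rating(headline_id: str, is_accurate: bool) -> None:
    """Called when a user answers the pop-up accuracy card."""
    ratings[headline_id].append(1 if is_accurate else 0)

def crowd_verdict(headline_id: str, min_raters: int = 15,
                  threshold: float = 0.5) -> bool | None:
    """Average a small crowd's ratings, abstaining until enough people have voted.

    Returns True (crowd leans accurate), False (crowd leans misleading),
    or None (too few ratings to say anything yet).
    """
    votes = ratings[headline_id]
    if len(votes) < min_raters:
        return None
    return sum(votes) / len(votes) >= threshold

# A ranking pipeline could treat a False verdict as a signal to downrank a post
# or route it to professional fact checkers, rather than as a final label.
```

Abstaining below a minimum crowd size matters here because of the implied truth effect Rand raises below: an unrated post should read as “unchecked,” not “verified.”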
CM: Do you have any partnerships with platforms under way?
DR: [My research group] has one partnership with Facebook around the crowdsourcing approach. We’re working with them to figure out how to use crowdsourcing to identify misinformation in a scalable way. The problem with professional fact checkers is that there are not nearly enough of them to deal with the amount of misinformation being created. We have another paper documenting that problem: lots of content never gets checked by fact checkers at all.
Also, there is the “implied truth effect.” Say a fact checker flags something they think is false and the post gets a warning. The absence of a warning on other posts then implies that they may have been checked and verified, even when they simply haven’t been checked at all. That’s why coming up with scalable fact-checking approaches is really important, which is where the crowd can come in.
The HKS Misinformation Review is fast-tracking a special issue on COVID-19. Check out the rest of our coverage of how coronavirus is upending the U.S. economy and education sector.