Communication about the unreviewed and preliminary nature of COVID-19-related preprint studies remains widely inconsistent among online media outlets, according to a new study published in the journal Health Communication.
In their analysis of digital news outlets including The New York Times, Medscape and Business Insider, researchers looked for specific keywords and found that while a little over half of stories used at least one phrase to indicate that the study was a preprint or that the findings were unreviewed or preliminary, the rest made no such distinction.
“[The pandemic] has shown us how important accurate and engaging coverage of science can be, but it’s also shown us how challenging it can be,” says Alice Fleerackers, a researcher at Simon Fraser University’s Scholarly Communications Lab and a coauthor of the new study. “In this paper, we’re encouraging people to start having those conversations and come up with best practices that are going to help journalists cover this research responsibly.”
Preprints are research papers that have not yet been reviewed and evaluated by independent researchers and published in an academic journal. The number of preprints posted to online servers such as medRxiv (pronounced “med archive”) and bioRxiv — servers created to allow biomedical researchers to quickly share their work — has exploded since the beginning of the coronavirus pandemic.
Preprints are an important part of open scientific discourse and the advancement of research, especially during a global health crisis. That’s why it’s important for journalists to identify them accurately and make clear that their findings are preliminary.
Dr. Anthony Fauci, chief medical adviser to President Joe Biden, took the time to explain what preprints are after referring to recent research during a White House press briefing on Jan. 21.
“We have this new phenomenon … where people get data, and they put it into a preprint server where it hasn’t yet been peer reviewed, but you have to pay attention to it because it gives you good information quickly,” he said.
Fleerackers had already been looking at how online news stories portray uncertainty around science when the pandemic hit. She decided to turn her attention to the use of preprints.
She and her coauthors explore how journalists at more than a dozen digital media outlets cover preprints in the study “Communicating Scientific Uncertainty in an Age of COVID-19: An Investigation into the Use of Preprints by Digital Media Outlets.”
For their analysis, they selected 2,500 COVID-19-related preprints posted on medRxiv and bioRxiv between Jan. 1 and April 30, 2020. They then searched for mentions of those studies in a database of online stories. Out of more than 10,000 mentions, they narrowed their analysis to 457 stories in 15 English-language outlets. The news outlets were selected and analyzed not because of readership or public recognition, but because they incorporated the greatest number of COVID-19-related preprints into their coverage.
Some of the publications mostly covered health and medicine, others were general news outlets with print products, and some were platforms for self-published articles. The outlets included Business Insider, Dailyhunt, Foreign Affairs New Zealand, Inverse, MedicalXpress, Medium, Medscape, MSN, The New York Times, The Conversation, The Guardian, News Medical, National Interest, Wired and Yahoo! News.
They looked for certain phrases that identified the preprint, such as the term “preprint” or phrases indicating the study was unreviewed or preliminary or needed verification.
“It was surprising that [for] some of the more high profile outlets like The New York Times or The Conversation — which is a stage where academics write about their own research — in less than half of their stories, they were mentioning the unreviewed nature of the research,” says Fleerackers.
Meanwhile, Medscape and Wired, which have a stronger focus on science and medicine, consistently mentioned the preprint nature of the studies, the analysis shows.
Fleerackers and her colleagues also provide several examples of how different publications accurately explain preprints in their stories. For example:
“The paper, which has not yet undergone peer review, appeared on Medrxiv preprint server,” says a January 2020 Wired article by Megan Molteni, “Scientists Predict Wuhan Outbreak Will Get Much Worse.” It highlights research in the early days of the pandemic.
“The study, however, published on a preprint server, medRxiv, where, as Medscape readers know, researchers publish early versions of a manuscript before they are peer-reviewed,” says a March Medscape article by Donavyn Coffey and Ivan Oransky. It explains why the publication chose not to cover some of the studies that other, more mainstream publications chose to cover.
“The research was posted on MedRxiv, a website where scientists have been posting articles submitted for publication elsewhere that have not yet been through peer review,” says an April New York Times story by James Gorman, “Disposable N95 Masks Can Be Decontaminated, Researchers Confirm.”
The researchers also looked to see how often stories linked back to the preprint. While they found that more than 90% of stories linked back to the original study, the practice varied by outlet. Some mentioned the research and linked to the preprint without indicating that the cited research was from a preprint study.
One outlet, Dailyhunt, an Indian content and news aggregator, stood out for the smallest number of links to original research. Only one-third of the outlet’s stories about preprints offered links to the studies.
Fleerackers says not providing links to the original papers puts readers at a disadvantage.
“To me, that’s an important thing to consider,” she says. “Even if you’re not using your words to indicate where your piece of evidence is coming from, I think the least that can be done is to provide a link or some sort of identifying detail that helps people to be able to do their own fact-checking, if they want to.”
Also, it’s important to note that not all preprints are equal in terms of quality or level of completion.
“Preprints are kind of a Wild West,” says Fleerackers. “Some are basically the exact manuscript that they just submitted to a journal,” while others are less robust. Not all preprint studies have been or will be submitted to a journal for publication.
We asked Fleerackers how reporters can best cover preprints, and here are her tips. The interview has been edited for length and clarity.
Journalist’s Resource: What’s the best way to describe a preprint study?
Fleerackers: “You can say ‘articles that have not yet been through peer review.’ That’s kind of an acceptable definition, and I think it’s pretty easy to understand what that means. Or ‘preprints are studies that are publicly available that haven’t yet gone through peer review.’”
“If your audience doesn’t know what peer review is, you want to describe briefly that it’s a process where outside experts evaluate the research.”
JR: How can journalists assess the quality of preprints?
Fleerackers: “Look at things like the sample size and who’s funding this study. Is that being disclosed in the preprint? If you’re looking at their literature review, are they building on what seems like a pretty strong body of evidence and do their results align with what was done before? If it’s totally novel, that might mean they’ve come across something cool. But it might also mean that there’s something kind of faulty going on with the study.”
Fleerackers also advises seeking outside experts to review the research for you. “Get them to let you know what they think and whether they think you should cover it,” she says.
She adds, “For me, one of the other important questions that maybe hasn’t been addressed elsewhere is to really ask yourself, are the benefits of reporting on these results more important than any potential risk of evidence turning out to be incorrect?”
JR: What are some of the questions reporters should ask the authors of preprint studies?
Fleerackers: “Ask them about the limitations. Scientists are supposed to report [that] in the paper itself, and so most people should be open to talking about it. I think another interesting question when covering preprints specifically is, why did you choose to publish this as a preprint and promote [it] as a preprint, rather than [waiting for] a peer review? And people will probably have some very different reasons, and that I think will help you, as a journalist, also evaluate whether those reasons are worth it to you and to your audience.”
Another good question, she says: “Do your findings align with those found in previous studies and who would be a good person to evaluate the quality of this work?”
She continues: “A journalist asked me a very good question about this paper the other day, and she says, ‘What would be your biggest fear in terms of a news story about this work?’ And I thought that that was a great question because it’s sort of asking the scientists to consider what would be a really bad outcome of this being covered or this being covered in a not totally accurate way.”
JR: What’s your biggest fear about the coverage of your study?
Fleerackers: “My biggest fear is that people are going to start thinking of preprints as an inherently bad thing, which they really aren’t. They have major benefits for science and I think they could have major benefits for the society, as long as they’re being accurately portrayed.”
It’s worth noting that the study focuses on news coverage during the first few months of the COVID-19 pandemic, when newsrooms were scrambling to cover a story that overtook many beats.
As Dr. John Inglis, co-founder of bioRxiv and medRxiv, noted in a recent e-mail to JR, the study, while valuable, doesn’t necessarily reflect the subsequent learning curve.
“The first preprints on SARS-CoV-2 didn’t appear on bioRxiv until January 15, and the peak of posting for pandemic-related preprints on bioRxiv and medRxiv was in May and June,” Inglis explained. “So the period of study was only the first 3.5 months of the pandemic and during this time, journalists — particularly those drafted from beats other than science — were still coming to terms with what preprints and preprint servers are.”
He said he has observed an improvement in recent months, adding, “Even reporters not steeped in the scholarly publishing process have taken on board the distinctions between a preprint and a paper published in a journal and their copy points it out.”
The study has several limitations, which the researchers note in their paper. It focuses on how stories communicate uncertainty about the COVID-19 preprint itself, not on uncertainty expressed elsewhere in the story. The researchers also focused on mentions of the 100 most-cited COVID-19-related preprints, excluding coverage of less popular ones.
The study was funded by the Social Sciences and Humanities Research Council of Canada.
This piece was updated on Jan. 28 to include input from Dr. John Inglis.
For more on covering preprints, see the Journalist’s Resource tip sheet “Covering Biomedical Research Preprints Amid the Coronavirus: 6 Things to Know.”
Here’s a good guide for spotting bad science by the University of California, Davis.
For additional tips, see the Journalist’s Resource article “How to Tell Good Research From Flawed Research: 13 Questions Journalists Should Ask.”
You can follow @preprintsifter, a tool that tracks down tweets from leading epidemiologists and other experts who are posting, discussing and informally reviewing COVID-19-related preprint papers on Twitter.