Expert Commentary

Covering biomedical research preprints amid the coronavirus: 6 things to know

Journalists need to know these six things to cover coronavirus-related preprints, research papers that haven't been peer reviewed by experts.


As a new coronavirus spreads across continents, numerous biomedical researchers have turned their focus to the pandemic and its impacts. Online publishing platforms are helping them share what they’ve learned quickly so medical professionals, government leaders and others can respond to prevent, treat and control infections.

Preprint servers allow researchers worldwide to post their findings so that anyone anywhere can access, in one place, thousands of new academic papers on health topics such as the coronavirus and COVID-19, the disease it causes.

Preprints are research papers that have not been published in an academic journal. They also have not undergone peer review, meaning that independent experts have not analyzed and critiqued the paper. Surviving peer review is no guarantee that a piece of research is high quality, but the process is designed for quality control.

While making preprints available to the public has many benefits for researchers, the scientific community has expressed concerns about journalists misinterpreting findings and ignoring or excluding context that is critical to understanding a research study’s preliminary results. Scientists also worry that journalists who are not trained to spot methodological flaws and misleading claims — issues experts would catch during peer review — will base some news coverage on problematic findings.

Such errors and shortcomings are particularly problematic during a deadly pandemic, when the public relies on news outlets for accurate, up-to-date information, including guidance on staying healthy and safe.

As a deluge of new coronavirus-related research floods preprint servers such as the health sciences server medRxiv and the biological sciences server bioRxiv, we reached out to scholars for help explaining how journalists should use preprints and for pointers on how to avoid mistakes.

Below, we highlight six key things reporters need to know about preprints, based on interviews with two people with vast experience in biomedical research: Bill Hanage, an associate professor of epidemiology at the Harvard T.H. Chan School of Public Health, and John R. Inglis, who has launched and managed multiple academic journals and, as executive director of the nonprofit Cold Spring Harbor Laboratory Press, co-founded medRxiv and bioRxiv.

  1. Servers for preprints in other academic subjects — physics, math and the social sciences, for example — have existed for decades. It has taken much longer for researchers studying life and health sciences to become comfortable with the idea of sharing knowledge this way, partly because patients could be hurt if doctors alter treatments or patients treat themselves based on preprint results.

bioRxiv (pronounced bio-archive) was launched in 2013; medRxiv (pronounced med-archive) began accepting papers in mid-2019. Inglis says the two preprint servers now host hundreds of coronavirus-related papers, most of which have been posted since late January.

Preprints have prompted awareness and evaluation of new research “in a way that hasn’t happened before in a viral outbreak,” he explains. He says preprints in the biological sciences were tough to find during the 2003 outbreak of severe acute respiratory syndrome and the 2012 outbreak of Middle East respiratory syndrome.

More than 90% of journal articles examining the epidemiology of SARS weren’t published until after the epidemic ended in 2003, according to a study that appeared in PLOS Medicine in 2018.

  2. Preprints generally undergo a basic screening. But they are not peer reviewed.

Most preprint servers are not-for-profit projects run by academics or academic institutions. Their main aim is simple: to help researchers disseminate their work quickly, because it can take weeks to years for a paper to be peer reviewed and published in an academic journal.

Preprint screenings are generally cursory, completed within a few days. To be posted on the preprint servers medRxiv and bioRxiv, a paper must pass a basic screening that looks for plagiarism, offensive content, non-scientific content and material that could pose a health risk. Screeners do not make judgments about a paper’s methods, conclusions or quality.

Hanage says preprint screenings help guard against “obviously dodgy stuff.” He and Inglis warn that journalists should examine preprints with caution.

According to Inglis, “anyone reading a preprint is seeing how its authors describe what they did and how they interpret the results. But peer review may do many things to the authors’ account: catch errors, argue with interpretation, dismantle or scale back claims, cut bits out, request more experiments and data, and more.”

  3. Preprint findings are preliminary and sometimes strictly theoretical. Journalists should make that clear in their coverage.

Journalists should avoid characterizing preprint findings as established facts. Hanage points to a recent news article about a preprint examining the coronavirus in the United Kingdom as an example of what not to do. The preprint describes an epidemiological model developed by researchers at the University of Oxford.

“This is a model that seeks to examine what the pandemic would look like in the UK if we assumed that there have been a very large number of mild and undetected infections, such that population immunity already exists to some extent,” Hanage explains, adding that the news outlet’s coverage “suggested that these were the conclusions, rather than theoretical possibilities given some suitably tortured data.”

Inaccurate reporting on preprints misinforms the public. Hanage says, “There is at present absolutely zero evidence for population-level immunity, and there will be none until a good serological survey is complete.”

Inglis says journalists should state explicitly in their stories that preprint findings are preliminary. “They must make that statement, of course, but it would help even more to have solicited and quoted the opinion of at least one independent expert and include any caveats they may have,” he says.

  4. Knowledge of research methods is important for gauging whether a preprint is worth covering.

Hanage says journalists who lack this expertise should not cover preprint research on health and biological topics such as infectious disease pandemics and COVID-19.

“Don’t go near it if you are not a science journalist,” he warns. “Avoid the practice of ‘churnalism,’ where you end up reporting on other news stories and controversies rather than hard fact. Ask whether the conclusions [of a research paper] are contrary to everything else that we think we know — and remember that anything very surprising is usually wrong.”

Journalists with knowledge of science, medicine and research methods are better able to spot flaws in preprints on health and life science topics. They also understand how drastically a preprint’s findings might change as a result of the peer-review process.

“It helps if journalists are aware of how the scientific process and peer review work and remember that what they find so appealing in a preprint may not survive community scrutiny,” Inglis explains. He says journalists can get a sense of how a preprint is received by other researchers by reading comments left on the preprint and responses on Twitter.

  5. Consulting researchers who were not involved with a study but have expertise on the study topic can help journalists gauge whether a preprint is newsworthy and how coverage should be framed.

When evaluating a preprint, Hanage suggests journalists find out whether its authors are reputable and have previously done high-quality research.

“If so,” he says, “then you can seek out a little informal peer review. Call other scientists in the same field and ask if they rate the work in the preprint as credible. If so, report it, emphasizing the preliminary nature of the findings.”

Inglis notes that it likely will be more difficult for journalists to gauge the reputations of some COVID-19 researchers. Many of those who have posted to medRxiv and bioRxiv are Chinese scientists who were on the front lines in the first months of the outbreak, he says. “Most have never posted to a preprint server before and are not well known in the West,” he says.

Inglis recommends reporters make a habit of conferring with researchers. “To avoid being drawn into reporting bad information,” he says, “a journalist should have a group of experts she can go to for advice, while remembering that even experts aren’t experts in everything: an epidemiologist and a molecular biologist may study the same virus but not be able to professionally evaluate each other’s work.”

  6. Preprints occasionally are withdrawn from preprint servers by their authors.

A key benefit of posting preprints online is that it allows authors to receive rapid feedback via posts to the web server and social media. “The entire community of scientists can judge new work for themselves and respond appropriately — critiquing it, testing it themselves, extending it, et cetera,” Inglis says.

Sometimes, the feedback makes it clear that researchers have more work to do. Nearly 80,000 papers have been posted to bioRxiv since its launch six years ago, and fewer than 90 have been withdrawn.

“Most were withdrawn by their authors either because they had discovered a reason to doubt the results they had posted or because not all the named authors had agreed to the submission, which is a condition of posting,” Inglis explains. “Since the deluge of COVID-19 papers began, two have been withdrawn [from bioRxiv].”

One of the two papers recently withdrawn claimed to find similarities between the coronavirus and HIV and suggested the coronavirus might have been engineered by humans. The authors realized their study “had analytical flaws when hundreds of scientists posted comments on Twitter and the bioRxiv site saying so,” Inglis says. “The other [paper] was withdrawn because the authors felt that they now had a larger patient dataset from which they might derive different conclusions.”


Check out our tip sheet on covering COVID-19 and our other coronavirus-related resources, including collections of research on hoarding and panic buying, how disease outbreaks affect people’s mental health and the effectiveness of online schooling.

You also might find our tip sheet on spotting bias in randomized clinical trials helpful.
