Expert Commentary

Covering scientific consensus: What to avoid and how to get it right

Three researchers explain how journalists can use scientific consensus to bolster their coverage and battle misinformation about public policy topics.


When reporting on controversial policy topics such as vaccine safety and climate change, journalists can look to scientific consensus to bolster their coverage and battle misinformation.

If you’re unsure what scientific consensus is, don’t understand its significance or have no idea how to gauge it, keep reading. This tip sheet features practical advice from three researchers with expertise on those topics.

Scientific consensus is the collective position scientists in a given field have taken, based on their interpretation of the available evidence. For example, the overwhelming majority of doctors say childhood vaccines are safe. Surveys of physicians and medical researchers “have repeatedly indicated that over 90% of doctors agree that adults and children should receive all recommended vaccines,” according to a paper published in 2016 in the medical journal BMJ Evidence-Based Medicine.

Knowing what experts think about an issue can help the public make informed decisions about it. When a reporter interviews a source whose views match the collective position, it’s a strong signal the information is trustworthy, explains Eric Merkley, an assistant professor of political science at the University of Toronto who studies expert consensus.

His research finds that news outlets do a poor job informing the public about the scientific consensus on hot-button issues such as vaccination, nuclear power and genetically modified foods — even when it’s directly relevant to their coverage.

When Merkley analyzed U.S. news coverage of 10 science and economic policy issues on which there’s clear consensus, he discovered “information related to expert consensus is contained in an extremely small portion of a news consumer’s diet on these issues.” He examined nearly 300,000 news articles and transcripts from six national and local newspapers, three cable news networks, three TV news networks and the Associated Press newswire service going back to 1980.

The resulting paper, “Are Experts (News)Worthy? Balance, Conflict, and Mass Media Coverage of Expert Consensus,” was published last year in Political Communication.

Merkley urges journalists to make a habit of pointing out when there’s broad expert agreement on the policy questions they’re covering. Audiences need that information.

“A lot of people only really get information about these issues through the news media, so when the opportunity arises to provide this contextual information, I believe it’s very important to do so,” he says.

This tip sheet aims to help with that. Below, Merkley joins two other researchers — Teresa Myers, an assistant research professor at George Mason University’s Center for Climate Change Communication, and Sara Shipley Hiles, an associate professor at the Missouri School of Journalism and veteran science journalist — to offer advice on how journalists can strengthen their work.

Here’s what to avoid when covering scientific consensus — and how to get it right.

WHAT TO AVOID: Citing individual researchers without noting how their views on a given topic compare with those of other scholars in the field.

Reporters often quote researchers without explaining whether their statements represent the views of researchers in the same field as a whole, Merkley says. Without that context, audiences might not know when an expert’s statement is considered extreme or contradicts consensus.

When expert consensus exists on a subject being covered, Merkley recommends journalists include that in their stories. He also warns that journalists do the public a disservice when they ask researchers to weigh in on topics outside their area of expertise or when they quote researchers providing highly contested points of view.

“We need some more critical interrogation of how expert sources are being used in news coverage,” Merkley says.

HOW TO GET IT RIGHT: Look to peer-reviewed research and scientific organizations for help gauging whether and how much agreement exists among scientists on a topic.

Myers suggests journalists look to these four sources for information about levels of agreement:

  • Studies of scientific agreement — Academic journals occasionally publish papers analyzing existing research to establish the level of consensus on a given issue. Just last month, Environmental Research Letters published the paper “Greater Than 99% Consensus on Human Caused Climate Change in the Peer-Reviewed Scientific Literature,” which finds “there is no significant scientific debate among experts about whether or not climate change is human-caused. This issue has been comprehensively settled, and the reality of [anthropogenic climate change] is no more in contention among scientists than is plate tectonics or evolution.”
  • Surveys of subject experts — When it’s unclear whether consensus exists, scholars, health care professionals and other experts may be asked to complete a survey to share their views on an issue or research question.

    A paper published in PLoS ONE last year examines the results of a series of surveys asking leading health informaticians how machine learning will influence primary care in the U.S. over the next several years. The main takeaway: The consensus is that machine learning “will engender training and primary care work force changes, improve rates of diagnostic accuracy, and increase access to primary care” within the next decade.


WHAT TO AVOID: Assuming a certain percentage of scientists in a given field need to agree in order to reach scientific consensus on a policy issue or question.

There’s no set threshold for achieving consensus. In other words, journalists shouldn’t assume a specific number or proportion of scientists in a field must reach the same conclusion. Having a consensus, Myers explains, simply means there’s broad agreement among subject experts on an issue or the answer to a policy question.

“There isn’t a very clear, bright-line legal standard that says, ‘If 99% of scientists agree, this is consensus,’” she says. “It can be a supermajority or it can be a majority. Broadly defined, it’s the reasonable level of agreement on a topic.”

HOW TO GET IT RIGHT: Explain what scientific consensus is and why it’s important. Use numbers, when possible, to convey levels of agreement.

Journalists won’t be able to find information on experts’ exact level of agreement on all topics in the hard sciences and social sciences. But scholars tend to track whether consensus exists on controversial topics with significant consequences.

Myers urges journalists to make clear the strength of agreement by pointing out the percentage of researchers in agreement, when possible. Including this number in news coverage not only reflects the reliability of the information but also appears to help correct misconceptions about what experts consider to be true, Myers and her colleagues write in the paper, “Simple Messages Help Set the Record Straight about Scientific Agreement on Human-Caused Climate Change: The Results of Two Experiments,” published in PLoS ONE in 2015.

Myers and her co-authors found that news articles using numbers to communicate the level of scientific consensus were more effective at correcting widespread misunderstandings about the consensus on human-caused climate change than verbal descriptors such as “most” and “the majority.”

Taking time to explain the meaning and value of scientific consensus also helps, according to a September 2021 paper in the journal Psychological Science, “Boosting Understanding and Identification of Scientific Consensus Can Help to Correct False Beliefs.” That study suggests news articles that describe scientific consensus and report what most scientists have to say about the safety of genetically engineered foods help correct misconceptions about the topic.


WHAT TO AVOID: Giving equal weight to the contrasting views of researchers, politicians, community leaders and others.

Journalists’ tendency to be fair and balanced sets them up for failure when covering science. Giving all sources’ views equal weight — for example, giving a vocal critic with an extreme stance as much space as a scholar who has studied the issue at hand for decades and whose views represent scientific consensus — creates the perception that science is divided or that no one knows what’s true.

“Journalists need to understand that covering science is not like covering politics,” Hiles wrote in an email interview. “There aren’t ‘two sides’ to a science story.”

Many news outlets, she noted, have gotten it wrong when covering climate change.

“For years, we presented climate change as a two-sided story, often a 50-50 story that quoted some people saying that humans caused climate change and some saying they didn’t,” Hiles wrote. “In reality, the science was already clear: The evidence for anthropogenic [human-caused] climate change was strong.”

HOW TO GET IT RIGHT: Take a “weight-of-evidence” or “weight-of-experts” approach.

Instead of resorting to both-sides-ism, spotlight the evidence and sources whose views match the scientific consensus, recommends Hiles, who’s leading a new journalism initiative called the Mississippi River Basin Ag & Water Desk, which will place 10 reporters in newsrooms in the region to cover agriculture, water and related issues starting next summer.

Hiles learned while studying an elite group of environmental journalists that they regretted having covered climate change as a controversy in prior years. They began advocating a “weight-of-evidence” approach in which “mainstream scientists are the focus of global warming science stories and ‘skeptics’ are given little, if any, ink,” Hiles and a co-author write in a paper published in Science Communication in 2014, “Climate Change in the Newsroom: Journalists’ Evolving Standards of Objectivity When Covering Global Warming.”

Journalists taking a “weight-of-evidence” approach also should emphasize the findings of high-quality research, including studies published in academic journals. Hiles suggested via email that reporters use caution when covering issues that haven’t been well researched or when most of the research on a topic consists of working papers or preprints, neither of which has undergone peer review. During the peer-review process, independent scholars evaluate a researcher’s work, pointing out problems and shortcomings.

Communication scholar Sharon Dunwoody also has spoken out in favor of weight-of-evidence reporting. However, she and a fellow researcher relabeled the approach “weight of experts” a few years ago “to more accurately capture its emphasis on communicating the distribution of expertise rather than evidence per se,” they explain in “Using Weight-of-Experts Messaging to Communicate Accurately About Contested Science,” which appeared in Science Communication in 2017.

That paper describes weight-of-experts narratives as “a straightforward expression of how experts are arrayed in a contested truth situation” whereas a weight-of-evidence frame is “a more comprehensive reflection that includes not only where experts sit on the continuum but also information about the evidence undergirding those judgments.”

A weight-of-experts approach is better at helping people determine the validity of scientific claims, Dunwoody and her colleague assert. They conducted an experiment in which 759 people read and answered questions about different versions of a made-up news story about pharmaceutical pollution. They learned that the story offering a weight-of-experts narrative led participants “to greater certainty about what scientists judged to be true and, importantly, to then use that perception to reach greater certainty about what they personally thought was true.”

ANOTHER WAY TO GET IT RIGHT: Remember that scientific consensus isn’t an ironclad guarantee that what scientists believe now will hold over time.

Journalists need to keep in mind that levels of agreement can change over time, rising or falling as scientists learn more about a topic.

A famous example of how scientists’ views can change: In 2005, two Australian researchers who discovered that the bacterium Helicobacter pylori causes peptic ulcers received a Nobel Prize in Physiology or Medicine. At the time they revealed their findings, “it was a long-standing belief in medical teaching and practice that stress and lifestyle factors were the major causes of peptic ulcer disease,” Niyaz Ahmed, a prominent epidemiologist, writes in the Annals of Clinical Microbiology and Antimicrobials.

“New information comes along, and that is part of the scientific method,” Hiles explained. “We have to be humble and follow the facts where they lead us. Give as much context as possible and help the audience understand that science is a process, not a destination.”
