Expert Commentary

3 steps to determine whether a medical study is newsworthy

With the amount of research published on a daily basis, journalists have to work to discern what’s worth covering. Here's a general guide.


Last week, Journalist’s Resource attended Health Journalism 2019, the annual conference of the Association of Health Care Journalists (AHCJ), in Baltimore, Maryland. One of the sessions we attended, titled “Begin Mastering Medical Studies,” offered pointers for deciding which research is worth covering.

This tip sheet summarizes key points made during the session by Tara Haelle, an independent health journalist and AHCJ topic leader for medical studies.

SO MANY OPTIONS, SO LITTLE TIME

With the amount of research published on a daily basis, journalists have to work to discern what’s worth covering. We’ve broken the process down into three steps as a general guide.

Step 1: Consider the category of the study.

As a starting point, Haelle suggested considering the category of the study you’re thinking of covering.

Generally, studies testing a medical intervention fall into one of the following categories:

  • Pre-clinical studies: This early phase of research precedes the clinical study phase. The research is not conducted with human subjects, so the findings are limited. There are two different kinds of pre-clinical studies:
    • In vitro: These studies are conducted on cells grown in a lab.
    • In vivo: These studies are conducted on non-human animals.
  • Clinical studies: If research shows promise in the pre-clinical phase, it might move on to clinical studies, which involve humans and examine their responses to the intervention. Clinical studies can take two forms:
    • Epidemiological/observational studies: Observational research, as the name suggests, involves observing ongoing behavior and assessing outcomes over time. These studies are not randomized. Think of a study that looks at the relationship between smoking and developing cancer. It’s a real-world experiment that hinges on long-term observation. Researchers can find correlations between variables, but they cannot, on the basis of a single observational study alone, claim causation. That’s because there could be other, uncontrolled variables that could explain the outcomes, such as weight, other medical conditions, genetics, or environmental exposure. For these reasons, epidemiological/observational studies stand in contrast to another key type of clinical study: randomized, controlled trials.
    • Randomized, controlled trials: These are studies in which a new intervention is randomly assigned to some participants in a trial and tested against a control group, which receives a standard treatment or a placebo, to determine its effects. These studies can provide evidence of causation. Randomized, controlled trials generally proceed through a number of stages:
      • Phase 0: This phase involves giving human subjects small exposures to the intervention in question. It aims to answer whether the intervention works in humans. “Is it worth moving forward?” is how Haelle summarized this phase of research.
      • Phase I: This phase tests the intervention to make sure it is safely tolerated in humans. The intervention is typically tested in healthy people who do not have the condition the intervention might treat.
      • Phase II: This is a larger trial that tests for effectiveness as well as safety. This can take from months to years.
      • Phase III: The new intervention is compared against other pre-existing options.
      • Food and Drug Administration (FDA) approval: Generally, after phase III, the intervention being studied can be approved (or rejected) to be brought to market.
      • Phase IV: This phase looks at the long-term effects of an intervention after FDA approval.

Alright, so which categories are worth covering?

Haelle provided some general guidelines: Later-phase, randomized, controlled clinical trials are often considered the gold standard of medical studies. But this doesn’t mean you shouldn’t ever cover other kinds of research, like pre-clinical studies.

Haelle offered the example of environmental exposures. An animal study of a certain chemical exposure and its associated effects could be worth covering if there’s human epidemiological evidence (like studies of people exposed to the chemical in drinking water) to pair it with.

“I don’t ever report on it just by itself,” Haelle explained. “I report on it in context, with other research.”

(And don’t forget to specify “in mice!” if the study was conducted in mice!)

Step 2: Assess newsworthiness.

  • Compared with existing research, how new are these findings? “That’s important to know, because if you’re reporting something for the 70th time, then it’s not news,” Haelle said. Remember, the findings don’t have to be positive to be newsworthy — Haelle emphasized that there can be value in covering negative results (i.e., a failed intervention), too. Another question to consider: how different is this intervention from others?
  • How strong are the findings?
    • Are they clinically significant? That is, do they have a practical, noticeable effect in daily life?
    • What is the effect size? That is, how much of an effect does the intervention have? For context, you might compare the effect size of the intervention to that of the standard treatment.
    • Are the findings statistically significant? Statistical significance is generally determined by the p-value of the data. [A brief primer from an earlier JR tip sheet on statistics: “P-values quantify the consistency of the collected data as compared to a default model which assumes no relationship between the variables in question. If the data is consistent with the default model, a high p-value will result. A low p-value indicates that the data provides strong evidence against the default model. In this case, researchers can accept their alternative, or experimental, model. A result with a p-value of less than .05 often triggers this. Lower p-values can indicate statistically significant results, but they don’t provide definitive proof of the veracity of a finding.”]
    • Ideally, findings are both clinically and statistically significant, but depending on the sample size, an intervention could be clinically but not statistically significant. “You really need to consider not just whether it’s statistically significant — not whether the findings are real just as a result of coincidence but whether they actually have clinical relevance, whether this is going to change practice,” Haelle advised. (For a quick numerical sketch of this distinction, see the example after this list.)
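To make that last distinction concrete, here is a minimal sketch in Python. The numbers are purely hypothetical, simulated blood-pressure readings, not data from any study Haelle discussed; they simply show how a very large trial can produce a p-value below .05 for an effect far too small to matter in practice.

```python
# Purely hypothetical illustration: in a very large trial, a tiny effect can
# clear the p < .05 bar while remaining clinically meaningless.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=0)

n = 50_000  # participants per arm (unrealistically large, for illustration)
# Simulated systolic blood pressure in mmHg; the "drug" lowers it by only 0.5 mmHg.
control = rng.normal(loc=140.0, scale=15.0, size=n)
treatment = rng.normal(loc=139.5, scale=15.0, size=n)

# Two-sample t-test comparing the treatment and control arms.
t_stat, p_value = stats.ttest_ind(treatment, control)
effect = control.mean() - treatment.mean()

print(f"effect size: {effect:.2f} mmHg")  # about 0.5 mmHg: clinically trivial
print(f"p-value: {p_value:.2g}")          # almost certainly below .05
```

A result like this clears the usual statistical threshold, but a half-point drop in blood pressure would not change how anyone is treated, which is the clinical-versus-statistical point Haelle was making.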

Step 3: Evaluate the methodology.

  • How big was the study? In a smaller sample, outliers — extreme data points at either end of the spectrum — have a bigger effect on the overall results. For example, an 11-foot beanstalk in a patch with two two-foot beanstalks would yield an average height of 5 feet per beanstalk. But if the 11-foot beanstalk is in a bigger patch of 20 two-foot beanstalks, the average height is about 2.4 feet (the quick check after this list runs the numbers). Suddenly Jack’s patch of beanstalks is cut down to size.
  • How long did the study last? “If it’s a diet study and it only lasted five days, don’t even bother,” Haelle said.
  • How were effects measured? Haelle gave the example of a study measuring the effects of an intervention on stress levels – there are a number of measures that one could look at to gauge effects, such as blood pressure, cortisol levels and self-reported stress levels. Consider the nuances of the different measures and what they may or may not convey. “You want to think about when they say a drug is improving something or decreasing something, think about whether they’re actually measuring what’s important,” Haelle said.
  • Who participated in the study? Was there a control group? Were the groups randomized?
  • Who funded the study? Is a study claiming pasta helps people lose weight funded by a pasta manufacturer, for example? While industry-funded research can be unbiased, some studies have found that pharmaceutical industry-funded clinical trials were more likely to report pro-industry results. So keep that in mind, and be sure to note the funding sources in your writing if there could be conflicts of interest.
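The beanstalk arithmetic above is easy to verify. Here is a quick check; the heights simply mirror the example, nothing else is assumed.

```python
# A single outlier pulls the mean much further in a small sample than in a large one.
small_patch = [11.0, 2.0, 2.0]        # one 11-foot stalk among two 2-foot stalks
big_patch = [11.0] + [2.0] * 20       # the same outlier among twenty 2-foot stalks

mean_small = sum(small_patch) / len(small_patch)  # 15 / 3  = 5.0 feet
mean_big = sum(big_patch) / len(big_patch)        # 51 / 21 ≈ 2.43 feet

print(f"small patch mean: {mean_small:.2f} ft")
print(f"big patch mean:   {mean_big:.2f} ft")
```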

If you think you’ve found a winner, get reporting! And if you’d like more guidance, check out our tip sheets on how to write about health research and how to conquer your fear of statistics. Also, we’ll spare you the learning curve with these 10 things we wish we’d known earlier about research.

Haelle recommended other resources, including the Association of Health Care Journalists’ resources for covering medical research, Health News Review’s toolkit, The Open Notebook, and Christie Aschwanden’s “Science Isn’t Broken” feature for FiveThirtyEight.
