Expert Commentary

5 tips for covering racial bias in health care AI

It’s important for journalists to take a nuanced approach to reporting about AI in order to unearth inequity, highlight positive contributions and tell patients’ individual stories in the context of the broader research.


The role of artificial intelligence is growing in health care, yet many patients have no idea their information is coming into contact with algorithms as they move through doctor appointments and medical procedures. While AI brings advancements and benefits to medicine, it can also play a role in perpetuating racial bias, sometimes unbeknownst to the practitioners who depend on it. 

For insight on how to cover the topic with nuance, The Journalist’s Resource spoke with Hilke Schellmann, an independent reporter who covers how AI influences our lives and a journalism professor at New York University, and Mona Sloane, a sociologist who studies AI ethics at New York University’s Center for Responsible AI. Schellmann and Sloane have worked together on crossover projects at NYU, although we spoke to them separately. This tip sheet is a companion piece to the research roundup “Artificial intelligence can fuel racial bias in health care, but can mitigate it, too.”

1. Explain jargon, and wade into complexity.

For beat journalists who regularly cover artificial intelligence, it can feel as though readers should understand the basics. But it’s better to assume audiences aren’t coming into every story with years of prior knowledge. Pausing in the middle of a feature or a breaking news story to briefly define terms is crucial to carrying readers through the narrative. Doing this is especially important for terms such as “artificial intelligence” that don’t have fixed definitions.

As noted in our research roundup on racial bias in health care algorithms, the term “artificial intelligence” refers to a constellation of computational tools that can comb through vast troves of data at rates far surpassing human ability, in a way that can streamline providers’ jobs. Some types of AI commonly found in health care already are:

  • Machine learning AI, where a computer trains on datasets and “learns” to, for example, identify patients who would do well with a certain treatment
  • Natural language processing AI, which can recognize and interpret human language and might, for example, transcribe a doctor’s clinical notes
  • Rules-based AI, where a computer follows explicit, hand-written rules and acts in a specific way when a particular data point shows up. These kinds of AI are commonly used in electronic medical records, for example to flag a patient who has missed their last two appointments (a minimal sketch of such a rule follows this list).
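
To make that last item concrete, here is a minimal, hypothetical sketch of the kind of hand-written rule such a system might apply; the field name and the two-appointment threshold are invented for illustration, not drawn from any real product.

```python
# A minimal sketch of a rules-based flag, loosely modeled on the
# missed-appointments example above. The field name and the two-appointment
# threshold are illustrative assumptions, not any real system's logic.

def flag_for_outreach(patient: dict) -> bool:
    """Return True when the hand-written rule says to flag the patient."""
    # Unlike machine learning, nothing here is learned from data:
    # a person wrote this rule, and the system simply applies it.
    return patient.get("missed_appointments", 0) >= 2


patients = [
    {"id": "A", "missed_appointments": 0},
    {"id": "B", "missed_appointments": 3},
]

for patient in patients:
    if flag_for_outreach(patient):
        print(f"Patient {patient['id']} flagged for follow-up outreach")
```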

Sloane advises journalists to ask themselves the following questions as they report, and to include the answers in their final piece of journalism: Is [the AI you’re describing] a learning- or a rule-based system? Is it computer vision technology? Is it natural language processing? What are the intentions of the system and what social assumptions is it based on?

Another term journalists need to clarify in their work is “bias,” according to Sloane. Statistical bias, for example, refers to a way of selectively analyzing data that may skew the story it tells, whereas social bias might refer to the ways in which perceptions or stereotypes can inform how we see other people. Bias is also not always the same as outright acts of discrimination, although it can very often lead to them. Sloane says it’s important to be as specific as possible about all of this in your journalism. As journalists work to make these complex concepts accessible, it’s important not to water them down.
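
To illustrate the statistical sense of the term, here is a minimal, hypothetical sketch of how a selectively drawn sample can skew the story the data tell; all numbers are invented.

```python
# Hypothetical illustration of statistical bias: estimating an average
# clinic wait time from a sample that systematically leaves out one group.
import statistics

insured_waits = [15, 20, 18, 22, 17]     # invented wait times, in minutes
uninsured_waits = [45, 50, 60, 55, 48]   # invented wait times, in minutes

# A convenience sample that only captured insured patients tells a
# different story than the full patient population does.
biased_estimate = statistics.mean(insured_waits)
true_average = statistics.mean(insured_waits + uninsured_waits)

print(f"Estimate from insured patients only: {biased_estimate:.1f} minutes")
print(f"Average across all patients:         {true_average:.1f} minutes")
```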

The public “and policymakers are dependent on learning about the complex intersection of AI and society by way of journalism and public scholarship, in order to meaningfully and democratically participate in the AI discourse,” says Sloane. “They need to understand complexity, not be distracted from it.”

2. Keep your reporting socially and historically contextualized.

Artificial intelligence may be an emerging field, but it intertwines with a world of deep-seated inequality. In health care settings in particular, racism abounds. For instance, studies have shown health care professionals routinely downplay and under-treat the physical pain of Black patients. There’s also a lack of research on people of color in fields such as dermatology.

Journalists covering artificial intelligence should explain such tools within “the long and painful arc of racial discrimination in society and in healthcare specifically,” says Sloane. “This is particularly important to avoid complicity with a narrative that sees discrimination and oppression as purely a technical problem that can easily be ‘fixed.’”

3. Collaborate with researchers.

It’s crucial that journalists and academic researchers bring their respective strengths together to shed light on how algorithms can both identify racial bias in health care and perpetuate it. Schellmann sees these two groups as bringing unique strengths to the table that make for “a really mutually interesting collaboration.”

Researchers tend to do their work on much longer deadlines than journalists, and within academic institutions researchers often have access to larger amounts of data than many journalists. But academic work can remain siloed from public view due to esoteric language or paywalls. Journalists excel at making these ideas accessible, including human stories in the narrative, and bringing together lines of inquiry across different research institutions.

But Sloane does caution that in these partnerships, it is important for journalists to give credit: While some investigative findings can indeed come from a journalist’s own discovery, such as self-testing an algorithm or examining a company’s data, an investigation that really stands on the shoulders of years of someone else’s research should make that clear in the narrative.

“Respectfully cultivate relationships with researchers and academics, rather than extract knowledge,” says Sloane. 

For more on that, see “9 Tips for Effective Collaborations Between Journalists and Academic Researchers.”

4. Place patient narratives at the center of journalistic storytelling.

In addition to using peer-reviewed research on racial bias in health care AI, or a journalist’s own original investigation into a company’s tool, it’s also important that journalists include patient anecdotes.

“Journalists need to talk to people who are affected by AI systems, who get enrolled into them without necessarily consenting,” says Schellmann.

But getting the balance right between real stories and skewed outliers is important. “Journalism is about human stories, and these AI tools are used upon humans, so I think it’s really important to find people who have been affected by this,” says Schellmann. “What might be problematic [is] if we use one person’s data to understand that the AI tool works or not.”

Many patients are not aware that health care facilities or physicians have used algorithms on them in the first place, though, so it may be difficult to find such sources. But their stories can help raise awareness for future patients about the types of AI that may be used on them, how to protect their data and what to look for in terms of racially biased outcomes.

Including patient perspectives may also be a way to push beyond the recurring framing that it’s simply biased data causing biased AI.

“There is much more to it,” says Sloane. “Intentions, optimization, various design decisions, assumptions, application, etc. Journalists need to put in more work to unpack how that happens in any given context, and they need to add human perspectives to their stories and talk to those affected.”

When you find a patient to speak with, make sure they fully consent to sharing their sensitive medical information and stories with you.

5. Stay skeptical.

When private companies debut new health care AI tools, their marketing tends to rely on validation studies that test the reliability of their data against an industry gold standard. Such studies can seem compelling on the surface, but Schellmann says it’s important for journalists to remain skeptical of them. Look at a tool’s accuracy, she advises. It should be 90% to 100%. These numbers come from an internal dataset that a company tests a tool on, so “if the accuracy is very, very low on the dataset that a company built the algorithm on, that’s a huge red flag,” she says.

But even if the accuracy is high, that’s not a green flag, per se. Schellmann thinks it’s important for journalists to remember that these numbers still don’t reflect how health care algorithms will behave “in the wild.”

A shrewd journalist should also be grilling companies about the demographics represented in their training dataset. For example, is there one Black woman in a dataset that otherwise comprises white men?
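
When a journalist can actually obtain a company’s evaluation data, one way to act on that skepticism is to look at both the demographic make-up of the test set and the accuracy broken out by group rather than overall. A minimal, hypothetical sketch, with all counts and group labels invented for illustration:

```python
# Hypothetical sketch: a strong overall accuracy number can hide poor
# performance for a group that is barely represented in the evaluation data.
from collections import defaultdict

# Each record: (demographic group, was the tool's prediction correct?)
records = (
    [("white men", True)] * 93 + [("white men", False)] * 2
    + [("Black women", True)] * 1 + [("Black women", False)] * 4
)

overall = sum(correct for _, correct in records) / len(records)
print(f"Overall accuracy: {overall:.0%}")  # 94%, which looks impressive

by_group = defaultdict(list)
for group, correct in records:
    by_group[group].append(correct)

for group, results in by_group.items():
    share = len(results) / len(records)
    accuracy = sum(results) / len(results)
    print(f"{group}: {share:.0%} of test set, accuracy {accuracy:.0%}")
# The breakdown shows Black women make up only 5% of this invented test set,
# with 20% accuracy for that group: the kind of gap worth asking about.
```

If a company only shares the headline number, asking for this kind of breakdown is one concrete follow-up question.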

“I think what’s important for journalists to also question is the idea of race that is used in healthcare in general,” adds Schellmann. Race is often used as a proxy for something else. The example she gives is using a hypothetical AI to predict patients best suited for vaginal births after cesarean sections (also known as VBACs). If the AI is trained on data that show women of color having higher maternal mortality rates, it may incorrectly categorize such a patient as a bad candidate for a VBAC, when in fact this specific patient is a healthy candidate. Maternal mortality outcomes are the product of a complex web of social determinants of health (where someone lives, what they do for work, what their income bracket is, their level of community or family support, and many other factors) in which race can play a role; but race alone does not shoehorn a person into such outcomes.
