Expert Commentary

Visual health misinformation: A primer and research roundup

With rapid advances in technology, it’s becoming easier to create and spread visual content that’s inaccurate, misleading and dangerous. Visual health misinformation shares features with other types of visual misinformation, but it also borrows the imagery and authority of science and medicine.

Two hands holding an iPhone against a black background.
Photo by Gilles Lambert on Unsplash

The rapid spread of health misinformation online since the start of the COVID-19 pandemic has spurred more studies and articles on the topic than ever before. But research and data are lagging on the impact and spread of visual health misinformation, including manipulated photos, videos and charts, according to a commentary published in August 2022 in Science Communication.

Visual misinformation is “visual content that is typically presented alongside text or audio that contributes to false or inaccurate presentations of information,” says Katie Heley, the commentary’s lead author and a cancer prevention fellow in the Health Communication and Informatics Research Branch of the National Cancer Institute.

With rapid advances in technology and proliferation of social media, mobile devices and artificial intelligence, it’s becoming easier to create and spread visual content that’s inaccurate and misleading, Heley and her colleagues write in “Missing the Bigger Picture: The Need for More Research on Visual Health Misinformation.”

Moreover, existing content moderation tools are mostly designed to catch misinformation in text, not in images and videos, making it difficult to detect and stop the spread of visual misinformation.

Researchers are working to better understand how widespread visual misinformation is and how it impacts consumers compared with misinformation spread via the written word. Recent studies, including a 2021 paper published in International Journal of Press/Politics, find that much COVID-19 misinformation contains visual elements, as does misinformation in general.

It’s important to pay attention to visual health misinformation, because failing to do so may undermine efforts to fully understand health misinformation in general and hamper efforts to develop effective solutions to fight it, according to Heley and her colleagues.

Journalists also have an important role. They are encouraged to be mindful of the existence of manipulated health images, videos and charts, and may want to inform their audience about the issue, “so that, hopefully, the public is a little more aware that it’s relatively easy now to edit or fabricate images,” says Heley.

Below is a primer on visual health misinformation and a roundup of relevant and noteworthy research.

Three categories of visual misinformation

Visual misinformation is not a new concept. For decades, researchers, particularly in the fields of communication, journalism and advertising, have been studying visual misinformation types and techniques, including photo manipulation, misleading images and visual deception. But recent advances in technology, including editing software and apps, and the proliferation of social media platforms have made image manipulation easier and more accessible. This is especially dangerous in the health space, where it can hamper public health efforts such as vaccination campaigns.

“If we just quickly think through the history of photography, once it became available, it became a tool of truth telling, of showing what a war really looked like,” says Stefanie Friedhoff, associate professor of the Practice of Health Services, Policy and Practice at Brown University. “That instinct has been built over almost 100 years, and all of a sudden, we find ourselves in an environment where pictures may or may not tell the truth. We’re just at the onset of understanding how much images are being manipulated and are being used to manipulate people.”

Visual content is manipulated in different ways. In their commentary, Heley and her colleagues break visual misinformation into three categories:

Visual recontextualization is the use of visual content that is untouched, unedited, and generally authentic, but is presented alongside text or audio that provides inaccurate or misleading information or context.

One example the authors write about in their commentary is a video, posted online, that implied athletes were collapsing after being vaccinated against COVID-19. The video depicted real incidents of athletes fainting or falling for reasons other than vaccination, but it was paired with text suggesting these incidents were the result of COVID-19 vaccination, creating a false narrative. Several athletes in the video had not yet been vaccinated against COVID-19 or, if they had been vaccinated, the fainting or falling was found to be unrelated, according to the authors.

Visual recontextualization could also occur with data visualizations, such as a graph presented with a misleading title.

The second category is visual manipulation. It refers to visual content that’s been modified by tools such as photo-editing software to change how the image is interpreted. It also includes distorting charts and graphs by manipulating elements such as the axes.
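The axis trick is easy to demonstrate. Below is a minimal sketch in Python using the matplotlib plotting library, with invented numbers: the same one-point difference between two groups looks negligible on a zero-based axis and dramatic on a truncated one.

```python
import matplotlib.pyplot as plt

# Hypothetical data: two group percentages that differ by a single point.
groups = ["Group A", "Group B"]
values = [62.0, 63.0]

fig, (honest, distorted) = plt.subplots(1, 2, figsize=(8, 3))

# Honest chart: the y-axis starts at zero, so the bars look nearly identical.
honest.bar(groups, values)
honest.set_ylim(0, 100)
honest.set_title("Axis starts at 0")

# Distorted chart: truncating the y-axis makes the same one-point
# difference look as if Group B is several times larger than Group A.
distorted.bar(groups, values)
distorted.set_ylim(61.5, 63.5)
distorted.set_title("Axis truncated")

plt.tight_layout()
plt.show()
```

Nothing in the underlying data changes between the two panels; only the axis range does, which is why this kind of distortion is so hard for casual viewers to spot.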

“Of the three categories of visual misinformation, visual manipulations are likely the most wide-ranging in terms of the strategies used (e.g., enhancing image quality and changing image elements) and the goals of the creator,” the authors write.

One example is the fashion industry’s extensive history of manipulating photographs of models to make them appear thinner, a practice now common on social media platforms. The practice could change how viewers perceive realistic body types and how satisfied they are with their own bodies.

Another specific example of visual manipulation is the use of photo-editing techniques to add the fictional label “Center for Global Human Population Reduction” to a picture of the Bill & Melinda Gates Foundation Discovery Center’s signage, which could fool viewers into believing that the center’s stated goal is to support depopulation efforts or to harm the public in some way, the authors write.

Manipulated videos may also be an especially effective tool for health misinformation since the perceived realism of videos generally increases their credibility, the authors note.

The third category, visual fabrication, refers to visual content that’s not real, but is produced with representations of people, events or things that make it appear authentic and legitimate. Deepfakes fall into this category: visuals that have been digitally manipulated so convincingly that most viewers can’t tell they’re fake.

Visual fabrications include techniques such as face swapping, lip-syncing and voice synthesis, in which fabricated words or new speech content are merged with video.

The authors note that several deepfake videos portraying political figures such as former President Barack Obama and Russian President Vladimir Putin have emerged in recent years, showing them giving fictitious addresses. “It is not hard to imagine a video of a public figure being similarly fabricated in the service of health misinformation efforts,” they write. Heley and her colleagues add: “The unique potential of these videos to cause harm — for example, by providing convincing ‘evidence’ to support false claims or co-opting trusted messengers — and the fact that technologies to create deepfakes are becoming more accessible, suggest that visual fabrication merits greater consideration going forward.”

Friedhoff highlights two other examples of visual misinformation: memes and well-produced videos that present themselves as “documentaries.”

“We obviously live in the era of the meme,” adds Friedhoff, co-founder and co-director of the Information Futures Lab, a new lab at Brown University working to combat misinformation and outdated communication practices and to foster innovation. “A picture is worth a thousand words, or communicates a thousand words, and using that allows those that are trying to spread misinformation to reach people quickly and intuitively.”

There are also professionally produced “documentaries” on topics such as COVID-19, featuring individuals who say they are experts in the field but are in fact messengers of misinformation.

Friedhoff says in many cases, visual misinformation should be called disinformation, because it is intentionally created to manipulate people in a certain way.

“At the same time, we want to be mindful around really distinguishing well-produced manipulative content from honest mistakes, or people discussing their views and issues, which can also come across as misinformation,” says Friedhoff.

Visual misinformation versus visual health misinformation

There are similarities and differences between visual health misinformation and other types of visual misinformation such as political visual misinformation, says Heley.

Visuals may be used as evidence for claims in all kinds of misinformation.

But misinformation about health and medicine may use well-known icons, images and aesthetics, such as a person in a lab coat, a medical chart or graph, an anatomical drawing or a medical illustration, to mislead an audience.

“The use of images such as these, that are so strongly associated with health, science and medicine, may provide a scientific frame and lend legitimacy to false claims,” says Heley.

Visual content may serve several functions within health misinformation messages, Heley and colleagues write. They include implying inaccurate connections between verbal and visual information; misrepresenting or impersonating authoritative institutions; leveraging visual traditions and conventions of science to suggest the information presented is evidence-based; and providing visual evidence to support a false claim.

A growing threat

Heley and her colleagues list several reasons why it’s important to pay attention to visual misinformation:

  • Visual content is ubiquitous on social media, especially on platforms like Instagram and TikTok.
  • Visual content is powerful in reach and influence. It is engaging and frequently shared. “Research also suggests that compared to written content alone, the addition of visual content enhances emotional arousal and, in some cases, persuasive impact — advantages that are concerning in the context of misinformation,” the authors write.
  • Visual content is more memorable than messages without visual components. It captures attention better, is understood more readily, remembered longer and recalled more accurately.
  • Visual content can transcend language and literacy barriers, which “may facilitate the spread of visual misinformation across different cultural and linguistic contexts as well as among individuals with lower literacy levels,” they write.
  • Visual manipulations are hard to detect, so people often overlook them and accept them as reality. “There’s something called the realism heuristic, particularly with photos and videos,” says Heley. “People may accept visuals as reality and so the misinformation may be especially convincing to people or people may treat it with less scrutiny or with a less critical eye to it than text alone, unfortunately.”
  • Also, people may not be great at detecting manipulated images, and even when they do identify a manipulated image, they may not be very good at pinpointing what the manipulation is, says Heley.

Meanwhile, visual misinformation detection tools are either not widely used or aren’t available and accessible to most people. These tools include reverse image searches, detection software, and technological approaches like blockchain to maintain a record of contextual information about a given photo or video.

“A number of these are promising, but the challenge is that they have limitations, with a big one being that none of them offer necessarily complete detection accuracy,” says Heley. “And for a number of them, they need to really be widely adopted to be effective” in thwarting misinformation.
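As a rough illustration of how one building block of such tools can work, here is a minimal sketch of a perceptual “average hash” in Python using the Pillow imaging library. This is not the algorithm behind any particular product, and the file names are hypothetical; it just shows the general idea: reduce an image to a tiny grayscale fingerprint so that near-duplicates of a known photo can be matched even after resizing or recompression.

```python
from PIL import Image

def average_hash(path: str, size: int = 8) -> int:
    """Reduce an image to a 64-bit fingerprint.

    Shrink the image to an 8x8 grayscale grid, then set one bit per
    pixel depending on whether it is brighter than the grid's mean.
    """
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for pixel in pixels:
        bits = (bits << 1) | (1 if pixel > mean else 0)
    return bits

def hamming_distance(a: int, b: int) -> int:
    """Count differing bits; a small distance suggests near-duplicate images."""
    return bin(a ^ b).count("1")

# Hypothetical usage: compare a recirculating image against a known original.
# original = average_hash("original_photo.jpg")
# suspect = average_hash("recirculated_photo.jpg")
# if hamming_distance(original, suspect) <= 5:
#     print("Likely the same image, possibly recontextualized")
```

Even this crude fingerprint survives rescaling and mild recompression, which is why hash-based matching underpins many reverse image search approaches; its limits, such as failing on heavily cropped or edited images, mirror the incomplete detection accuracy Heley describes.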

Also, most content moderation technologies are not yet built to tackle visuals, Friedhoff adds.

“One aspect that’s important is that [visual misinformation] is increasingly being used to circumvent moderation technology, which is often word-based,” she says. “So the [artificial intelligence software] that looks through specific words and find[s] potentially questionable content that then gets pushed in front of human content moderators, that is [a] whole lot harder to do for visual materials.”
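A toy sketch makes the gap concrete. In the hypothetical Python filter below, with an invented keyword list, a text post carrying a false claim is flagged, but the identical claim rendered into an image’s pixels passes untouched, because the filter has no words to match.

```python
from typing import Optional

# Hypothetical, highly simplified word-based screening pass. Real
# moderation systems are far more sophisticated, but many still key
# on words rather than on the pixels of an image.
FLAGGED_PHRASES = ["miracle cure", "vaccines cause"]  # invented examples

def screen_post(text: Optional[str], image_bytes: Optional[bytes]) -> str:
    """Return a moderation decision for a social media post."""
    if text and any(phrase in text.lower() for phrase in FLAGGED_PHRASES):
        return "send to human moderator"
    # A meme or screenshot making the same claim contains no machine-
    # readable words, so this filter never sees anything to match.
    return "allow"

print(screen_post("This miracle cure beats any vaccine!", None))  # flagged
print(screen_post(None, b"<meme pixels making the same claim>"))  # allowed
```

Closing that gap requires an extra step, such as optical character recognition or image classification, before any word-based rules can apply, which is part of why visual content is so much harder and costlier to moderate.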

Audio misinformation, in which spoken words are altered or fabricated, is another growing threat.

“With audio, we’re now where we were with visual misinformation two or three years ago, because we realize there’s so much audio content,” says Friedhoff. “I’m very sure that we’re going to see more research on that.”

Challenges in research

Research on the spread and impact of visual misinformation, and how to counter it, is lagging behind the problem itself.

Existing research on visual health misinformation, and misinformation in general, mostly focuses on identifying and classifying misinformation on specific platforms and on specific topics. There are studies that have shown the presence of health misinformation on social media platforms. But there’s a notable gap in the literature about the role of visual content in health misinformation. And almost no work has been done to study the impact of visual misinformation specifically or to develop specific solutions to counter it, Heley and her colleagues write.

There are several reasons for the dearth of studies.

Compared with the written word, visuals are complex, which makes them more difficult to study and compare.

“So, if you think about even a single image, there are a lot of variables,” says Heley. “What’s the type of image? Is it a graphic? Is it a realistic photo? Is it a cartoon? What are the colors? What’s the setting? Are there people? What are their facial expressions?”

And there is a need for more tools to study visual misinformation.

“With texts, we have automated methods such as natural language processing that help to understand text and spoken words,” says Heley. “We don’t always have comparable tools in the visual space. And a lot of researchers may not necessarily have the expertise in visual methods or access to the technology, such as eye-tracking devices, that they need to conduct this kind of research.”

In their commentary, Heley and her colleagues list several future areas of research, including developing and deploying manipulated visual media detectors.

“More research is needed to understand whether certain populations are more likely to be exposed to visual misinformation (e.g., due to the nature of the platforms they use), whether certain groups are more likely to be misled by visual misinformation (e.g., due to lower levels of health, digital, or graph literacy), and whether greater vulnerability to visual misinformation ultimately contributes to disparities in health outcomes,” they write.

Advice to journalists

Know that visual misinformation is a thing and it’s an important part of people’s information diet at this point, Friedhoff says.

When reporting on communities, try to find out what people’s information diet is and what type of content they’ve seen a lot of, she advises. “How can we interpret the moment that we’re in if we’re not connected to the kinds of stories and misinformation that people see?”

Heley encourages journalists to be aware of visual misinformation and to stay as vigilant as possible by using tools such as Google reverse image search to verify content.

“I think visual misinformation will continue to be a concern,” says Heley. “I don’t know exactly how it will change and what shape it will take but I think all of the indications around visual content and then the changes in technology are pointing” toward an increase in visual misinformation.

Research roundup

Beyond (Mis)Representation: Visuals in COVID-19 Misinformation
J. Scott Brennen, Felix M. Simon and Rasmus Kleis Nielsen. International Journal of Press/Politics. January 2021.

Researchers find that visuals in more than half of the 96 pieces of online misinformation analyzed explicitly served as evidence for false claims, and that most of those visuals were mislabeled rather than manipulated.

“Scholars would be well served by attending to the ways that visuals — whether taken out of context or manipulated — can work to ground and support false or misleading claims,” they write.

They also identify three distinct functions of visuals in coronavirus misinformation: “Illustration and selective emphasis, serving as evidence, and impersonating authorities.”

Fighting Cheapfakes: Using a Digital Media Literacy Intervention to Motivate Reverse Search of Out-of-Context Visual Misinformation
Sijia Qian, Cuihua Shen and Jingwen Zhang. Journal of Computer-Mediated Communication, November 2022.

The authors designed a digital media literacy intervention that motivates and teaches users to reverse search news images when they encounter news posts on social media. Their study included 597 participants.

They define “cheapfakes” as out-of-context visual misinformation or visual recontextualization, which is the practice of using authentic and untouched images in an unrelated context to misrepresent reality.

Their findings suggest that “while exposure to the intervention did not influence the ability to identify accurately attributed and misattributed visual posts, it significantly increased participants’ intention of using reverse image search in the future, which is one of the best visual misinformation detection methods at the moment.”

Visual Mis- and Disinformation, Social Media, and Democracy
Viorela Dan, et al. Journalism & Mass Communication Quarterly, August 2021.

In this essay, the authors write “(Audio)visual cues make disinformation more credible and can help to realistically embed false storylines in digital media ecologies. As techniques for (audio)visual manipulation and doctoring are getting more widespread and accessible to everyone, future research should take the modality of disinformation, its long-term effects, and its embedding in fragmented media ecologies into account.”

Internet Memes: Leaflet Propaganda of the Digital Age
Joshua Troy Nieubuurt. Frontiers in Communication, January 2021.

The article is an exploration of internet memes as the latest evolution of leaflet propaganda used in digital persuasion. “In the past such items were dropped from planes, now they find their way into social media across multiple platforms and their territory is global,” the author writes.

A Picture Paints a Thousand Lies? The Effects and Mechanisms of Multimodal Disinformation and Rebuttals Disseminated via Social Media
Michael Hameleers, et al. Political Communication, February 2020.

Researchers exposed 1,404 U.S. participants to visual disinformation content related to refugees and school shootings. They find “partial evidence that multimodal disinformation was perceived as slightly more credible than textual disinformation.”

“Fact checkers may offer a potential remedy to the uncontrolled spread of manipulated images,” they write. “In line with this, we found that fact checkers can be used to discredit disinformation, which is in line with extant research in the field of misinformation.”

Seeing Is Believing: Is Video Modality More Powerful in Spreading Fake News via Online Messaging Apps?
S. Shyam Sundar, Maria D. Molina and Eugene Cho. Journal of Computer-Mediated Communication, November 2021.

In the study, 180 participants from rural and urban areas in and around Delhi and Patna in India were exposed to fake news via WhatsApp in text, audio or video form.

Results show “users fall for fake news more when presented in video form,” the authors write. “This is because they tend to believe what they see more than what they hear or read.”

Images and Misinformation in Political Groups: Evidence from WhatsApp in India
Kiran Garimella and Dean Eckles. Misinformation Review, July 2020.

Researchers collected 2,500 images from public WhatsApp groups in India and find that image misinformation is highly prevalent, making up 13% of all images shared in the groups.

They sort the image misinformation into three categories: images taken out of context, photoshopped images and memes.

They also developed machine learning models to detect image misinformation. But, “while the results can sometimes appear promising, these models are not robust to [adapt to] changes over time,” they write.

Prevalence of Health Misinformation on Social Media: Systematic Review
Victor Suarez-Lledo and Javier Alvarez-Galvez. Journal of Medical Internet Research, January 2021.

The authors review 69 studies and find “the greatest challenge lies in the difficulty of characterizing and evaluating the quality of the information on social media. Knowing the prevalence of health misinformation and the methods used for its study, as well as the present knowledge gaps in this field will help us to guide future studies and, specifically, to develop evidence-based digital policy action plans aimed at combating this public health problem through different social media platforms.”

A Global Pandemic in the Time of Viral Memes: COVID-19 Vaccine Misinformation and Disinformation on TikTok
Corey Basch, et al. Human Vaccines & Immunotherapeutics, March 2021.

Researchers identified 100 trending videos with the hashtag #covidvaccine, which together had 35 million views. In total, 38 videos “Discouraged a Vaccine” and 36 “Encouraged a Vaccine.”

“While a slightly larger number of posts discouraged versus encouraged a COVID-19 vaccine, the more troubling aspect of the discouraging posts was the display of a parody/meme of an adverse reaction, even before the vaccine was being distributed to the public. We believe this reflects a deliberate and dangerous effort to communicate anti-vaccination sentiment,” the authors write.

Additional reading

Fighting fake news: 5 free (but powerful) tools for journalists
Faisal Kalim. What’s New in Publishing, July 2019.

These 6 tips will help you spot misinformation online
Alex Mahadevan. Poynter, December 2021.

How to Spot Misinformation Online
A free online course by Poynter Institute.

Fighting Disinformation Online: A Database of Web Tools
Rand Corporation, December 2019.

10 ways to spot online misinformation
H. Colleen Sinclair. The Conversation, September 2020.

Fact checking tools
A collection by the Journalist’s Toolbox, presented by the Society of Professional Journalists.

The Media Manipulation Casebook
The Technology and Social Change project at Shorenstein Center for Media, Politics and Public Policy.
