
Deepfake technology is changing fast — use these 5 resources to keep up

Deepfake videos are becoming easier to make every day. These five resources can help journalists keep up with this fast-changing technology.

(Dorrell Tibbs/Unsplash)

The 17th-century philosopher René Descartes once imagined a demon that could make people experience a reality that wasn’t real. Nearly 400 years later, a user on the social news platform Reddit conjured realistic pornographic videos featuring Hollywood actresses who weren’t really there.

The user’s handle: deepfakes.

Since Vice first profiled that Redditor in late 2017, “deepfake” has come to mean a video that has been digitally manipulated so well that it may be difficult for the average viewer to tell it is fake.

Many deepfakes put someone in a situation that never happened, or show them saying something they never said. One example: a video that appears to show late-night television host Matt Damon telling viewers he ran out of time for his last guest, Matt Damon.

In that example, a deepfaker put Damon’s face onto late-night TV host Jimmy Kimmel’s body, an ironic spin on Kimmel’s running gag of apologizing to Damon for running out of time. Last year, BuzzFeed ran a deepfake app for 12 hours to make a video of former President Barack Obama saying some out-of-character things.

Today, deepfakes are even easier to produce. A deepfake can be made with just a few photographs. Tools and methods for creating realistic but fake audio have also cropped up.

The technology behind deepfakes (which we explore more below) can be used for good. People with disabilities that make it difficult to speak may find new and improved voices. But for now, the potential harm seems likely to outweigh any potential good. Anti-domestic violence advocates warn that deepfakes could be used for intimate partner abuse — by putting an ex-partner in a pornographic video and distributing it, for example.

Top artificial intelligence researchers — never mind journalists — are having a hard time keeping up with deepfake technology, according to a report from early June in the Washington Post. Recent news coverage has centered on how deepfakes might shake up the 2020 presidential election. Reuters is training its journalists to recognize deepfakes. And 77% of Americans want restrictions on publishing and accessing deepfakes, according to the Pew Research Center.

Computer technology often moves much faster than peer-reviewed journals, which can take months or years to publish academic articles. Arxiv, an open-access research repository hosted by Cornell University, is one place to find current deepfake research. Papers posted there aren’t peer reviewed, but they are screened by moderators before they appear.

Before we get too deep into recent deepfake research, here are five key resources to know about:

  • Arxiv-sanity is a search tool for sifting through Arxiv papers by topic, popularity and publication date.
  • The AI Village Slack channel is open to the public and often includes discussions on recent deepfake advances. AI Village is “a community of hackers and data scientists working to educate the world on the use and abuse of artificial intelligence in security and privacy.”
  • The Tracer weekly newsletter tracks trends in synthetic media like deepfakes.
  • The r/SFWdeepfakes subreddit has examples of deepfakes that do not contain potentially offensive content. Subreddits are pages within Reddit devoted to specific topics.
  • Since last year, the WITNESS Media Lab — a collaboration with the Google News Initiative — and George Washington University have convened media forensics experts to explore deepfakes, focusing mostly on how to detect them. Their research is another valuable starting point.

One last bit of irony: despite the growing body of research on how to identify and combat malignant deepfakes, manipulated videos may not even need to be that good to spread misinformation. So-called cheapfakes — like the doctored video that popped up on social media in May, slowed down to make U.S. House Speaker Nancy Pelosi appear drunk — don’t need to be sophisticated to be believed.

“Sometimes we set the bar too high on the effectiveness of media manipulation techniques, expecting that high fidelity equates to impact,” says Joan Donovan, director of the Technology and Social Change Research Project at the Harvard Kennedy School’s Shorenstein Center, where Journalist’s Resource is also housed. “However, small changes to pre-existing video, audio and images present the same challenges for audiences.”

“It’s not that we are fooled, but that we want to believe in the integrity of evidence that images and audio provide as documentation of an event. As they say in Internetland, ‘Pics or it didn’t happen.’”

With that phrase — “integrity of evidence” — in mind, here’s some of the recent research on deepfakes.

Advances in deepfake technology

Generative Adversarial Nets

Goodfellow, Ian J.; et al. Neural Information Processing Systems Conference, 2014.

In this paper, the authors describe generative adversarial networks, or GANs, the technology that makes deepfakes so realistic. This is the paper that started it all.

GANs work by pitting two artificial intelligence models against each other. The authors liken it to counterfeiting currency: a counterfeiter tries to produce fake bills that look real, while the police try to spot the fakes. The counterfeiter’s goal is to fool the police.

To produce deepfakes, one computer model acts like the counterfeiter and tries to create an artificial face based on example images. The other model acts like the police and compares the artificial productions to the real images and identifies places where they diverge. The models go back and forth many times until the artificial image is practically identical to the original.

The big breakthrough with GANs is that they allow computers to create. Before GANs, artificial intelligence algorithms could classify images, but had a harder time creating them.
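To make the counterfeiter-and-police loop concrete, here is a minimal training sketch in PyTorch. It uses toy fully connected networks and random stand-in data rather than real images, so every size and name below is illustrative, not the paper’s actual setup.

```python
# Minimal GAN training loop in PyTorch -- a toy sketch of the
# counterfeiter/police dynamic, not the paper's exact architecture.
import torch
import torch.nn as nn

NOISE_DIM, DATA_DIM = 16, 64  # toy sizes; real systems use images

# The "counterfeiter": turns random noise into a fake sample.
generator = nn.Sequential(
    nn.Linear(NOISE_DIM, 128), nn.ReLU(),
    nn.Linear(128, DATA_DIM))

# The "police": scores how real a sample looks (1 = real, 0 = fake).
discriminator = nn.Sequential(
    nn.Linear(DATA_DIM, 128), nn.ReLU(),
    nn.Linear(128, 1), nn.Sigmoid())

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

for step in range(1000):
    real = torch.randn(32, DATA_DIM) + 3.0   # stand-in for real data
    fake = generator(torch.randn(32, NOISE_DIM))

    # Police turn: learn to tell real from fake.
    d_loss = loss_fn(discriminator(real), torch.ones(32, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(32, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Counterfeiter turn: learn to fool the police.
    g_loss = loss_fn(discriminator(fake), torch.ones(32, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```

In modern deepfake systems, both models are much larger convolutional networks trained on large collections of real images.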

Everybody Dance Now

Chan, Caroline; Ginosar, Shiry; Zhou, Tinghui; Efros, Alexei A. Arxiv, August 2018.

In this paper, researchers from the University of California, Berkeley show how the motions of a person in a source video can be transferred to a target person in another video. The method reduces the source dancer to a stick figure of their pose, then renders the target person on top of that stick figure. The target appears to perform the moves originally performed in the source video.

The results are imperfect. Target subjects appear jittery and faces are sometimes blurred. Still, this research indicates where deepfake technology is headed. Spoiler alert: it’s not just face-swapping. Realistic motion manipulation is also on the horizon.
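The per-frame data flow the paper describes can be sketched in a few lines: estimate the source dancer’s pose, rasterize it as a stick figure, and hand that to an image generator trained on footage of the target person. The models below are untrained stand-ins (a real system would use a genuine pose estimator such as OpenPose and a pix2pix-style generator), so this shows only the shape of the pipeline.

```python
# Schematic per-frame motion transfer, loosely following the paper:
# source frame -> pose keypoints -> stick figure -> target frame.
# Both models below are untrained stand-ins, purely for illustration.
import torch
import torch.nn as nn

def estimate_pose(frame: torch.Tensor) -> torch.Tensor:
    """Stand-in for a real pose estimator (e.g., OpenPose).
    Returns (num_joints, 2) pixel coordinates."""
    return torch.rand(18, 2) * frame.shape[-1]

def render_stick_figure(keypoints: torch.Tensor, size: int = 128) -> torch.Tensor:
    """Rasterize keypoints into a one-channel pose image (dots only
    here; the paper also draws limbs between joints)."""
    canvas = torch.zeros(1, size, size)
    for x, y in keypoints.long().clamp(0, size - 1):
        canvas[0, y, x] = 1.0
    return canvas

# Stand-in for a pose-to-image generator trained on the target person.
generator = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 3, 3, padding=1), nn.Tanh())

source_frame = torch.rand(3, 128, 128)        # one frame of the source video
pose = estimate_pose(source_frame)            # 1. extract the dancer's pose
stick = render_stick_figure(pose)             # 2. draw the stick figure
target_frame = generator(stick.unsqueeze(0))  # 3. target "performs" the move
```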

A Style-Based Generator Architecture for Generative Adversarial Networks

Karras, Tero; Laine, Samuli; Aila, Timo. Arxiv, March 2019.

What if the person in that picture wasn’t really a person at all? In this paper, researchers from graphics chipmaker Nvidia improve on techniques that produce convincing images of non-existent people.

Realistic images of fake people have been around for a few years, but the breakthrough in this paper is that human users controlling the image generator can edit aspects of the fake images, like skin tone, hair color and background content.

The authors call their approach “style-based generation.” Using source images, their generator identifies styles such as pose and facial features to produce an image of a fake person. Real people controlling the generator can then change the styles to adjust how the fake person looks. The authors also apply the technique to images of cars and hotel rooms.
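The core mechanism behind that control is simple to sketch: a mapping network converts latent noise into a “style” vector, and each generator layer applies that vector as a per-channel scale and shift to its feature maps, a trick known as adaptive instance normalization. The PyTorch sketch below shows only this mechanism, with toy dimensions that are assumptions rather than the paper’s.

```python
# Sketch of style-based generation's core idea: a mapping network
# turns latent noise z into a style vector w, and each generator
# layer applies w as a per-channel scale and shift (adaptive
# instance normalization). Toy sizes; the real generator is far larger.
import torch
import torch.nn as nn

LATENT, CHANNELS = 64, 32

# Mapping network: z -> w ("style" space).
mapping = nn.Sequential(
    nn.Linear(LATENT, 128), nn.ReLU(),
    nn.Linear(128, LATENT))

class AdaIN(nn.Module):
    """Normalize features, then re-style them with scale/shift from w."""
    def __init__(self, latent, channels):
        super().__init__()
        self.norm = nn.InstanceNorm2d(channels)
        self.to_scale = nn.Linear(latent, channels)
        self.to_shift = nn.Linear(latent, channels)

    def forward(self, features, w):
        scale = self.to_scale(w).unsqueeze(-1).unsqueeze(-1)
        shift = self.to_shift(w).unsqueeze(-1).unsqueeze(-1)
        return self.norm(features) * (1 + scale) + shift

adain = AdaIN(LATENT, CHANNELS)
z = torch.randn(1, LATENT)
w = mapping(z)                           # the editable "style"
features = torch.randn(1, CHANNELS, 8, 8)
styled = adain(features, w)              # same content, new style
```

Because every layer reads its style from the same vector, nudging that vector changes coherent attributes of the output, which is what lets users adjust things like hair color without redrawing the whole image.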

Few-Shot Adversarial Learning of Realistic Neural Talking Head Models

Zakharov, Egor; et al. Arxiv, May 2019.

Talking heads used to be tough to fake. Individual faces are complex, and the shape and contour of faces differ widely across people. Just a few years ago, an algorithm might need hundreds or thousands of source images to create a somewhat realistic deepfake.

In this paper, researchers from the Samsung AI [Artificial Intelligence] Center and the Skolkovo Institute of Science and Technology, both in Moscow, create talking head videos using an algorithm that learns from just eight original images. The quality of the deepfake improves with more original images, but the authors show that far fewer source images and far less computing power are now needed to produce fake talking head videos.

The authors also show it is possible to animate still images. They bring to life famous headshots of Marilyn Monroe, Salvador Dalí, Albert Einstein and others.
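Conceptually, the few-shot setup has two pieces: an embedder that distills a handful of photos into a single vector representing the person, and a generator that turns a facial-landmark image plus that vector into a video frame. The toy sketch below shows only that data flow; both networks are untrained stand-ins, and in the paper they are first meta-learned on a large corpus of talking head videos.

```python
# Conceptual few-shot talking-head setup: K source photos -> averaged
# person embedding -> generator conditioned on that embedding plus a
# facial-landmark image. Untrained toy networks, for shape only.
import torch
import torch.nn as nn

EMBED = 64

embedder = nn.Sequential(              # photo -> person embedding
    nn.Conv2d(3, 8, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(8, EMBED))

generator = nn.Sequential(             # landmarks (+ embedding) -> frame
    nn.Conv2d(1 + EMBED, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 3, 3, padding=1), nn.Tanh())

source_photos = torch.rand(8, 3, 64, 64)    # the "eight original images"
person = embedder(source_photos).mean(0)    # one embedding for the person

landmarks = torch.rand(1, 1, 64, 64)        # target pose as a landmark image
conditioning = person.view(1, EMBED, 1, 1).expand(1, EMBED, 64, 64)
frame = generator(torch.cat([landmarks, conditioning], dim=1))
```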

Legal status and judicial challenges

Though deepfake technology is advancing quickly, it’s not yet at a point where fake and real videos are totally indistinguishable. Still, it’s not difficult to imagine that the technology may soon speed past the point of no return into a videographic future where fiction and reality converge.

U.S. Rep. Yvette Clarke (D-N.Y.) introduced legislation in June 2019 that would require deepfake producers to include a digital watermark indicating that the video contains “altered audio or visual elements.” Other federal efforts to address deepfakes haven’t gained much traction. U.S. Sen. Ben Sasse (R-Neb.) introduced the Malicious Deep Fake Prohibition Act in December 2018, but it expired without any co-sponsors.

Pornographic Deepfakes — Revenge Porn’s Next Tragic Act

Delfino, Rebecca. Fordham Law Review, forthcoming.

Rebecca Delfino, clinical professor of law at Loyola Law School, Los Angeles, provides a comprehensive overview of federal and state legislation that could be used to combat deepfakes.

“Daily we are inundated in every space both real and cyber with a barrage of truthful and fake information, news, images, and videos, and the law has not kept pace with the problems that result when we cannot discern fact from fiction,” Delfino writes.

While legislation has been introduced, there is no federal law governing the creation or distribution of deepfakes; federal prosecutors may instead be able to turn to related statutes, such as the federal cyberstalking law. Nor do any state laws specifically address deepfakes, Delfino finds. At the state level, as at the federal level, laws related to cyberstalking and revenge porn may be used to prosecute people who produce pornographic deepfake videos.

“A federal law criminalizing pornographic deepfakes would provide a strong and effective incentive against their creation and distribution,” Delfino writes. “The slow, uneven efforts to criminalize revenge porn at the state level over the last decade demonstrates that waiting for the states to outlaw deepfakes will be too long of a wait as the technology becomes more sophisticated and more accessible.”

10 Things Judges Should Know about AI

Ward, Jeff. Bolch Judicial Institute at Duke Law, Spring 2019.

Deepfakes may pose new challenges across the American judiciary. What will happen to an institution that relies on factual records, like video evidence, when those records can be easily faked? Imagine a manipulated video showing property damage before an incident allegedly happened, or deepfake audio of a conspiring CEO, writes Jeff Ward, an associate clinical professor of law at Duke University.

“As damaging as any isolated use of such technology may be, the ubiquitous use of hyper-realistic fakes could also threaten something even more fundamental — our ability to trust public discourse and democratic institutions,” Ward concludes.

Combating malignant deepfakes

Exposing DeepFake Videos By Detecting Face Warping Artifacts

Li, Yuezun; Lyu, Siwei. Arxiv, May 2019.

Today’s deepfakes aren’t perfect. With face-swapping — one face placed on top of another — the swapped face must be digitally transformed and warped to fit, according to the authors, researchers at the University at Albany, State University of New York. In that way, deepfakes already carry a kind of watermark that exposes them as not genuine.

The authors describe a method to identify deepfakes that doesn’t require a large number of real and fake images to teach an algorithm subtle differences between the two. Instead, the authors’ method identifies the telltale warping that, for now, is a dead deepfake giveaway.
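A practical upshot is how training data can be produced: rather than collecting actual deepfakes, the detector can learn from ordinary photos on which the warping artifact has been simulated. Below is a rough OpenCV sketch of that idea; it assumes the face bounding box is already known, and the paper itself uses landmark-based affine warps rather than this simple crop-and-degrade.

```python
# Rough sketch of simulating face-warping artifacts on a real photo,
# in the spirit of the paper's negative-example generation: degrade
# the face region, then paste it back. Assumes the face box is known.
import cv2
import numpy as np

def simulate_warping_artifact(image: np.ndarray, face_box) -> np.ndarray:
    """image: HxWx3 uint8; face_box: (x, y, w, h) of the face region."""
    x, y, w, h = face_box
    face = image[y:y + h, x:x + w]

    # Deepfake pipelines generate faces at low resolution, then warp
    # them up to fit -- mimic that by downscaling and re-upscaling.
    small = cv2.resize(face, (w // 4, h // 4), interpolation=cv2.INTER_LINEAR)
    degraded = cv2.resize(small, (w, h), interpolation=cv2.INTER_LINEAR)
    degraded = cv2.GaussianBlur(degraded, (5, 5), 0)

    out = image.copy()
    out[y:y + h, x:x + w] = degraded   # paste the "warped" face back
    return out

# Usage: pairs of (real, simulated) images can then train a classifier
# to spot the resolution mismatch that betrays a face swap.
photo = np.random.randint(0, 256, (256, 256, 3), dtype=np.uint8)
fake_like = simulate_warping_artifact(photo, (64, 64, 128, 128))
```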

Protecting World Leaders Against Deep Fakes

Agarwal, Shruti; Farid, Hany; Gu, Yuming; He, Mingming; Nagano, Koki; Li, Hao. Workshop on Media Forensics at the Conference on Computer Vision and Pattern Recognition, June 2019.

Remember that deepfake BuzzFeed made of Obama saying out-of-character things? What if, instead of BuzzFeed disclosing that it was a deepfake and the fake Obama making tongue-in-cheek remarks, an anonymous creator had put dangerous words in Obama’s mouth?

“With relatively modest amounts of data and computing power, the average person can create a video of a world leader confessing to illegal activity leading to a constitutional crisis, a military leader saying something racially insensitive leading to civil unrest in an area of military activity, or a corporate titan claiming that their profits are weak leading to global stock manipulation,” the authors write.

The authors focus on deepfakes circulating on the web that use likenesses of U.S. politicians such as Obama, President Donald Trump and former U.S. Secretary of State Hillary Clinton. They describe a digital forensics technique to pinpoint deepfakes based on subtle facial movements pulled from real videos of those politicians.
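The general shape of that technique is easy to sketch: describe each short clip by how the person’s facial-movement signals correlate with one another over time, fit a one-class model on clips known to be authentic, and flag clips whose correlations don’t fit. In the toy version below, random arrays stand in for the facial action unit tracks the authors extract with facial-analysis software, so only the pipeline, not the numbers, is meaningful.

```python
# Toy sketch of the paper's soft-biometric idea: describe a clip by
# the pairwise correlations of facial-movement signals over time,
# train a one-class model on authentic clips, flag outliers as suspect.
import numpy as np
from sklearn.svm import OneClassSVM

def clip_features(signals: np.ndarray) -> np.ndarray:
    """signals: (num_signals, num_frames) facial-movement time series.
    Returns the upper triangle of their correlation matrix."""
    corr = np.corrcoef(signals)
    return corr[np.triu_indices_from(corr, k=1)]

rng = np.random.default_rng(0)
# 50 authentic clips of one speaker: 20 movement signals x 300 frames,
# random here as a placeholder for real action-unit tracks.
real_clips = [rng.standard_normal((20, 300)) for _ in range(50)]
model = OneClassSVM(nu=0.05).fit([clip_features(c) for c in real_clips])

suspect = rng.standard_normal((20, 300))           # an unseen clip
verdict = model.predict([clip_features(suspect)])  # -1 = doesn't match
```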

Combating Deepfake Videos Using Blockchain and Smart Contracts

Hasan, Haya R.; Salah, Khaled. IEEE Access, April 2019.

Blockchain may be one way to authenticate videos, according to the authors, researchers from Khalifa University in Abu Dhabi. A blockchain is a digital ledger that documents each time something, like a video, is created or altered, in a way that can’t itself be manipulated. In the framework the authors propose, a video’s creator can allow others to request to edit, alter or share the video, and any subsequent changes are documented on the blockchain.
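The paper implements this with smart contracts, but the underlying idea can be illustrated with a much simpler tamper-evident log in plain Python: each entry commits to the video’s content hash and to the previous entry, so rewriting history breaks the chain. The sketch below is conceptual only and is not the authors’ system.

```python
# Much-simplified illustration of tamper-evident provenance: each log
# entry commits to the video's content hash and the previous entry,
# so rewriting history breaks the chain. (The paper builds this with
# smart contracts on a blockchain, not a local list.)
import hashlib
import json

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

chain = []  # the "ledger"

def record(video_bytes: bytes, action: str) -> None:
    prev = chain[-1]["entry_hash"] if chain else "genesis"
    entry = {"action": action, "video_hash": sha256(video_bytes), "prev": prev}
    entry["entry_hash"] = sha256(json.dumps(entry, sort_keys=True).encode())
    chain.append(entry)

def verify(video_bytes: bytes) -> bool:
    """Is this exact video traceable to the recorded history?"""
    return any(e["video_hash"] == sha256(video_bytes) for e in chain)

record(b"...original video bytes...", "created")
record(b"...edited video bytes...", "edited with permission")
print(verify(b"...edited video bytes..."))  # True: traceable
print(verify(b"...a deepfake..."))          # False: not trusted
```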

“Our solution can help combat deepfake videos and audios by helping users to determine if a video or digital content is traceable to a trusted and reputable source,” the authors conclude. “If a video or digital content is not traceable, then the digital content cannot be trusted.”
