Viral celebrity deepfake ad warns of AI being used to 'trick you into not voting'

A video with more than 6 million views on YouTube is warning voters to pay closer attention to what they see and hear online.
A hotel's window reads, "Let's get real about AI."

Election interference increasingly relies on artificial intelligence and deepfakes. That's why one viral public service ad is using the technology itself to sound the alarm.

"This election bad actors are going to use AI to trick you into not voting," the ad says. "Do not fall for it. This threat is very real."

The "Don't Let AI Steal Your Vote" video features Hollywood stars like Rosario Dawson, Amy Schumer, Chris Rock and Michael Douglas. But many of them aren't real. Douglas, Rock and Schumer, for example, are deepfakes.

"The artists who are involved in this were super enthusiastic about doing it," Joshua Graham Lynn, CEO and Cofounder of RepresentUs, the national and non-partisan anti-corruption organization behind the video, told Scripps News.

"Everybody that you see there either gave us their likeness or performed in person volunteer. They all were super excited to do it to help get out the vote because they know this is a really important election," Lynn added.

RELATED STORY | Scripps News got deepfaked to see how AI could impact elections

The video, which has amassed over 6 million views on YouTube, warns voters to pay closer attention to what they see and hear online.

"If something seems off it probably is," the real-life Rosario Dawson says in the video.

"Right now, it's so hard to tell what's real and fake on the Internet," Lynn said. "You just look at any new video, and you sometimes can't tell if it's just been made completely by AI."

"The technology is moving fast, and more importantly, malicious actors are always going to be at the forefront," he added.

Disinformation experts and community leaders have called out AI-generated content being used to sow chaos and confusion around the election. The Department of Homeland Security, ABC News previously reported, warned state election officials that AI tools could be used to "create fake election records; impersonate election staff to gain access to sensitive information; generate fake voter calls to overwhelm call centers; and more convincingly spread false information online."

"And so what we want is for voters to use their brains," Lynn said. "Be skeptical if you see something telling you not to participate. If you see something about a candidate that you support, question it. Double-check it."

While deepfakes could be used to spread election disinformation, experts warn they could also be used to obliterate the public's trust in official sources, facts, or their own instincts.

"We have situations where we're all starting to doubt the information that we're coming across, especially information related to politics," Purdue University Professor and Kaylyn Jackson Schiff told Scripps News. "And then with the election environment that we're in, we've seen examples of claims that real images are deepfakes."

Schiff said this phenomenon, this widespread uncertainty, is part of a concept called "the liar's dividend."

"Being able to credibly claim that real images or videos are fake due to widespread awareness of deepfakes and manipulated media," she said.

RELATED STORY | San Francisco sues websites used to create deepfake nudes of women and girls

Schiff, who is also the co-director of Purdue's Governance and Responsible AI Lab, and Purdue University PhD candidate Christina Walker have tracked political deepfakes since June 2023, capturing over 500 instances in their Political Deepfakes Incidents Database.

"A lot of the things that we capture in the database, the communication goal is actually for satire, so almost more similar to a political cartoon," Walker told Scripps News. "It's not always because everything is very malicious and intended to cause harm."

Still, Walker and Schiff say some of the deepfakes are meant to cause "reputational harm," and even parody videos made for entertainment can take on a new meaning if shared out of context.

"It's still a concern that some of these deepfakes that are initially propagated for fun could deceive individuals who don't know the original context if that post is then shared again later," Schiff said.

While the deepfakes in the "Don't Let AI Steal Your Vote" video are hard to spot, Scripps News took a closer look and found visual artifacts and disappearing shadows. Deepfake technology has improved, but Walker said there are still tell-tale signs for now.

"This can be extra fingers or missing fingers, blurry faces, writing in the image, not being quite right or not lining up. Those things can all indicate that something is a deepfake," Walker said. "As these models get better, it does become harder to tell. But there are still ways to fact-check it."

Fact-checking a deepfake or any video that triggers an emotional response, especially around the election, should start with official sources like secretaries of state or vote.gov.

"We encourage people to search for additional sources of information, especially if it's about politics and close to an election," Schiff said. "As well as just generally thinking about who the source of the information is and what motivations they might have in sharing that information."

"If there's anything telling you as a voter, 'Don't go to the polls. Things have been changed. There's a disturbance. Things have been delayed. You can come back tomorrow,' double-check your sources. That's the most important thing right now."