
How to Spot AI Deepfakes that Spread Election Misinformation

Media Inquiries: Cassia Crogan, University Communications & Marketing

Generative AI systems, such as ChatGPT, are trained on large datasets to create written, visual or audio content in response to prompts. When fed real images, some algorithms can produce fake photos and videos known as deepfakes.

Content created with generative artificial intelligence (AI) systems is playing a role in the 2024 presidential election. While these tools can be used harmlessly, they allow bad actors to create misinformation more quickly and realistically than before, potentially increasing their influence on voters.

Domestic and foreign adversaries can use deepfakes and other forms of generative AI to spread false information about a politician’s platform or doctor their speeches, said Thomas Scanlon, principal researcher at Carnegie Mellon University’s Software Engineering Institute and an adjunct professor at its Heinz College of Information Systems and Public Policy.

“The concern with deepfakes is how believable they can be, and how problematic it is to discern them from authentic footage,” Scanlon said. 

 

Voters have seen more ridiculous AI-generated content — such as a photo of Donald Trump appearing to ride a lion — than an onslaught of hyper-realistic deepfakes full of falsehoods, according to the Associated Press. Still, Scanlon is concerned that voters will be exposed to more harmful generative content on or shortly before Election Day, such as videos depicting poll workers saying an open voting location is closed.

That sort of misinformation, he said, could prevent voters from casting their ballots because there will be little time to correct the false information. Overall, AI-generated deceit could further erode voters’ trust in the country’s democratic institutions and elected officials, according to the university’s Block Center for Technology and Society, housed in the Heinz College of Information Systems and Public Policy.

“People are just constantly being bombarded with information, and it's up to the consumer to determine: What is the value of it, but also, what is their confidence in it? And I think that's really where individuals may struggle,” said Randall Trzeciak, director of the Heinz College Master of Science in Information Security Policy & Management (MSISPM) program.

Leaps and bounds in generative AI

For years, people have spread misinformation by manipulating photos and videos with tools such as Adobe Photoshop, Scanlon said. These fakes are easier to recognize, and they’re harder for bad actors to replicate on a large scale. Generative AI systems, however, enable users to create content quickly and easily, even if they don’t have fancy computers or software.

People fall for deepfakes for a variety of reasons, faculty at Heinz College said. If the viewer is using a smartphone, they’re more likely to blame a deepfake’s poor quality on bad cell service. If a deepfake echoes a belief the viewer already has — for example, that a political candidate would make the statement depicted — the viewer is less likely to scrutinize it.

Most people don’t have time to fact-check every video they see, meaning deepfakes can sow doubt and erode trust over time, wrote Ananya Sen, an assistant professor of information technology and management at Heinz College, in a statement. He’s concerned that ballot-counting livestreams, while intended to increase transparency, could be exploited to create deepfakes.

Once the false information is out there, there’s little opportunity to correct it and put the genie back in the bottle. 

Unlike previous means of creating disinformation, generative AI can also be used to send tailor-made messages to online communities, said Ari Lightman, a professor of digital media and marketing at Heinz College. If one member of the community accidentally shares the content, the others may believe its message because they trust the person who shared it.

Adversaries are “looking at consumer behavioral patterns and how people interact with technology, hoping that one of them clicks on a piece of information that might cascade into a viral release of disinformation,” Lightman said.
 
It’s difficult to unmask the perpetrators of AI-generated misinformation. The creators can use virtual private networks and other mechanisms to hide their tracks. Countries with adversarial relationships with the U.S. are likely weaponizing this technology, Lightman said, but he’s also concerned about individuals and terrorist groups that may be operating under the radar.
 

What voters need to know

People should trust their intuition and attempt to verify videos they believe could be deepfakes, Scanlon said. “If you see a video that's causing you to have some doubt about its authenticity, then you should acknowledge that doubt,” he said. 

Here are a few signs that a video could be a deepfake, according to Scanlon (a rough code sketch of two of these checks follows the list):

  • Jump cuts in the editing. Today’s AI systems are largely unable to create long deepfake videos from one point of view, so the video’s angle may change every few seconds, or the subject may be shown from multiple sides in choppy succession.
  • Inconsistencies in lighting. Deepfake videos will often include shadows that, unrealistically, come from more than one direction or are present for only part of the video. Often, the lighting of the video will flicker.
  • Mismatched reactions. The video may portray the subject saying something shocking with a straight face, or vice versa. If there are other people in the video, their reactions may not match the subject's message or tone.
  • Discrepancies in skin tone or a lack of facial symmetry. Viewers might see significant differences in the subject's skin tone, especially if their hands or arms are included in the frame. The subject may have ears or eyes that look disproportionate to each other.
  • Trouble with glasses and earrings. Viewers may see that the subject has missing or mismatched earrings, or glasses that don’t fit.
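
To make two of these signs concrete, here is a minimal, hypothetical sketch of how they might be checked automatically. It is not a real deepfake detector: the function name flag_suspicious_video and both thresholds are illustrative assumptions, and the script only approximates jump-cut detection (via frame-to-frame histogram similarity) and lighting flicker (via swings in average brightness). It assumes OpenCV (pip install opencv-python) and NumPy are installed.

```python
# Toy heuristic inspired by two of the signs above: jump cuts and
# flickering lighting. Real deepfake detection relies on far more
# sophisticated forensic and machine-learning techniques.
import cv2          # assumption: OpenCV is installed (pip install opencv-python)
import numpy as np


def flag_suspicious_video(path, cut_threshold=0.5, flicker_threshold=8.0):
    """Count abrupt scene cuts and sudden brightness swings in a video.

    Both thresholds are illustrative guesses, not calibrated values.
    """
    cap = cv2.VideoCapture(path)
    prev_hist = None
    prev_brightness = None
    frames = cuts = flickers = 0

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

        # A hard cut shows up as a sharp drop in histogram correlation
        # between consecutive frames.
        hist = cv2.calcHist([gray], [0], None, [64], [0, 256])
        hist = cv2.normalize(hist, hist).flatten()
        if prev_hist is not None:
            if cv2.compareHist(prev_hist, hist, cv2.HISTCMP_CORREL) < cut_threshold:
                cuts += 1

        # Lighting flicker shows up as large frame-to-frame jumps in
        # mean brightness.
        brightness = float(np.mean(gray))
        if prev_brightness is not None and abs(brightness - prev_brightness) > flicker_threshold:
            flickers += 1

        prev_hist, prev_brightness = hist, brightness
        frames += 1

    cap.release()
    return {"frames": frames, "hard_cuts": cuts, "brightness_jumps": flickers}


# Example usage (the file name is a placeholder):
# print(flag_suspicious_video("campaign_clip.mp4"))
```

Frequent hard cuts or brightness jumps would only justify closer scrutiny, not a verdict; the human checks above, and ultimately fact-checking the content itself, remain the more reliable tests.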

The Block Center has compiled a guide to help voters navigate generative AI in political campaigning. The guide encourages voters to ask candidates questions about their use of AI and to send their elected representatives letters that support stronger AI regulations.

“An informed voter should take as much time as they need to have confidence in the information that goes into their decision-making process,” Trzeciak said.

Legislative landscape

There is no comprehensive federal legislation regulating deepfakes, and several bills aimed at protecting elections from AI threats have stalled in Congress. Some states have passed laws prohibiting the creation or use of deepfakes for malicious purposes, but not all are explicitly related to election interference.

The Pennsylvania State Senate has introduced a bill that would impose civil penalties on those who disseminate campaign advertisements containing AI-generated impersonations of political candidates, provided malicious intent can be proven in court. The bill has yet to come to a vote.

The existing laws are not enough to regulate the use of deepfakes, Scanlon said. But, he added, the murky nature of cybercrimes means that any federal regulation will likely be difficult to enforce.

“Enforcement will probably look like making examples of folks and groups periodically to act as a deterrent,” Scanlon said. 

Beyond implementing and enforcing regulations, Lightman said the country needs to address the political polarization and diminished societal trust in institutions that allow misinformation to catch fire.

“Everything we look at is either sarcasm or completely false propaganda. And we don't trust each other,” he said. “We have to go back to having a social understanding of how what we're engaged with is eroding trust. If we can understand that, maybe we can take steps to reverse it.”
