The deepfakes of Trump and Biden that you are most likely to fall for

Experiments show that viewers can usually identify video deepfakes of famous politicians – but fake audio and text are harder to detect.

People can generally spot when videos of famous politicians giving speeches are actually AI-generated deepfakes. But we have more trouble discerning counterfeits from reality when listening to audio or reading supposed text transcripts.

This is a real photo of Joe Biden giving a speech
SHAWN THEW/EPA-EFE/Shutterstock


“Audio deepfakes are, in my opinion, a little more dangerous in the current time because visual deepfakes are still harder to create,” says Aruna Sankaranarayanan at the Massachusetts Institute of Technology.

Sankaranarayanan and her colleagues collected text transcripts, audio and video of political speeches given by US presidents Joe Biden and Donald Trump. They also created deepfake speeches, and took quotes that actually came from other public figures and misattributed them to the presidents, editing the text to match the speaking style of either Biden or Trump. Then the researchers ran five randomised experiments that exposed more than 2200 study participants to a mix of the authentic and artificial content.


People had the most trouble identifying fake political speeches written as text transcripts, calling out the false information with just 57 per cent accuracy. Their detection accuracy rose to 69 per cent when viewing silent videos without audio or subtitles, 81 per cent for audio only and 86 per cent for video deepfakes with audio.
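To illustrate how accuracy figures like these can be computed, here is a minimal Python sketch. The response records and field names are hypothetical, invented for this example rather than taken from the study's data; detection accuracy here is simply the share of items whose real-or-fake status a participant judged correctly, grouped by media format.

```python
from collections import defaultdict

# Hypothetical participant responses as (media_format, is_fake, judged_fake).
# Illustrative records only, not data from the study.
responses = [
    ("text transcript", True, False),
    ("text transcript", True, True),
    ("silent video", True, True),
    ("audio only", True, True),
    ("video with audio", True, True),
    ("video with audio", False, False),
]

correct = defaultdict(int)
total = defaultdict(int)
for media_format, is_fake, judged_fake in responses:
    total[media_format] += 1
    if judged_fake == is_fake:
        correct[media_format] += 1

# Detection accuracy per media format: correct judgements / total judgements.
for media_format in sorted(total):
    pct = 100 * correct[media_format] / total[media_format]
    print(f"{media_format}: {pct:.0f} per cent detection accuracy")
```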


One of the five experiments gauged responses to sophisticated video deepfakes whose audio was produced in different ways: either with AI-powered voice-cloning technology or by human actors doing impressions of the politicians. The fakes with AI-cloned voices proved harder to detect than those voiced by human actors.


Another experiment surveyed people’s reactions to the speeches without directly asking them about authenticity. This revealed whether participants could spot synthetic media in a scenario closer to real-world conditions, says Siwei Lyu at the University at Buffalo in New York, who was not involved with the study. “Under such circumstances, deepfakes sneak in without the viewers’ awareness,” he says. Even without prompting, participants in this experiment raised doubts about the fake speeches, especially for the richer media formats. Again, suspicions were lowest for the text transcripts and highest for videos with audio.


“The authors meaningfully point out how text – a domain for which we have more limited interventions for authenticating content than for visual content – may actually be more effective for misleading people than visuals,” says Claire Leibowicz at the Partnership on AI, a non-profit organisation based in San Francisco.


Leibowicz says future studies should also examine deepfakes of less well-known political candidates and other public figures. She adds that tech companies should share more data on how people interact with fake and real content on social media platforms.


Journal reference: Nature Communications DOI: 10.1038/s41467-024-51998-z
