Deepfake Danger: Why AI Deception Isn’t Just a Celebrity Problem

By: Olivia Frantzeskos

An AI-generated “deepfake” video surfaced last month depicting celebrities, including Scarlett Johansson, Lenny Kravitz, Ben Stiller, and Adam Sandler, protesting Kanye West’s now-viral antisemitic comments.

The celebrities are each shown wearing a white T-shirt featuring the back of a hand with a Jewish star on it, middle finger raised, and the name Kanye written underneath, making it abundantly clear who the gesture is aimed at. The video was created and distributed without the celebrities’ permission to use their likenesses. While some of those depicted have previously spoken out against West, the video prompted criticism from Johansson over the “misuse of AI” and sparked a broader conversation about the dangers AI poses to individuals, regardless of a creator’s original intent.

Johansson said in a statement to media outlets: “I am a Jewish woman who has no tolerance for antisemitism or hate speech of any kind. But I also firmly believe that the potential for hate speech multiplied by AI is a far greater threat than any one person who takes accountability for it. We must call out the misuse of AI, no matter its messaging, or we risk losing a hold on reality.”

The dangers of AI and the misuse of this evolving technology can affect anyone. Deepfakes, AI-generated images or videos that use deep learning techniques to manipulate or superimpose an existing photo or video onto another, can produce realistic but fabricated depictions of people saying or doing things they never actually did, posing serious risks to personal privacy, reputation, and even public safety. So what can ordinary people (who aren’t Hollywood stars) do to protect themselves? Legal protections may be available, such as a claim for defamation or commercial misappropriation. In reality, though, legal remedies often require hiring lawyers and filing a lawsuit, which most people cannot afford. Still, there are independent steps people can take to protect themselves from false AI-generated content.

People who discover false “deepfake” content depicting themselves can report it to the platform hosting it, which may prompt an investigation or removal and limit its reach. They can also report it to law enforcement; deepfake pornography, in particular, is illegal in many jurisdictions. Social media platforms typically allow takedown requests, and most publish community guidelines explaining what content is prohibited and which behaviors will lead to accounts being suspended or permanently banned. TikTok, for example, provides its users detailed instructions for reporting a post as misinformation or manipulated media. A platform’s willingness to remove deepfake content, however, may depend on what it contains. Obscene content or blatant misinformation on a matter of public concern (e.g., medical information) is more likely to be removed, while a platform may be less willing to remove non-offensive content that is claimed to be personally defamatory or to falsely depict someone saying or doing something.

But are these protections enough? While social media platforms may attempt to curb reported deepfakes, detection technology is still catching up, making enforcement inconsistent. Ordinary people with limited resources have access only to imperfect detection tools, and legal recourse may be prohibitively expensive, putting many remedies out of reach. Platforms may also refuse to remove content that a victim considers harmful but that does not violate their guidelines. The controversy surrounding the AI antisemitism video demonstrates the need for a broader conversation about regulatory protections, which currently fall short of shielding people from the spread of misinformation and false or obscene content through AI-generated “deepfake” videos and images.