Deepfake ads featuring Jenna Ortega ran on Meta platforms. Big Tech needs to fight this.

The harrowing crisis of deepfakes continues, once again targeting women.
By Meera Navlakha
Jenna Ortega attends The 2023 Met Gala.
Credit: Jamie McCarthy/Getty Images

The crisis of deepfakes continues. Meta platforms, including Instagram, Facebook, and Messenger, reportedly hosted AI-generated ads depicting Wednesday actor Jenna Ortega as undressed.

As reported by NBC News, the ads used a blurred picture of Ortega taken when she was 16 years old, and instructed users on how they could change the celebrity's outfit, including an option for removing all clothes. The images were reportedly manipulated by an app called Perky AI, listed as developed by company RichAds, which is described in Apple's App Store as a platform that uses AI to "create super-realistic or fantasy-like persons" with prompts. This includes "NSFW" (not safe for work) images, which are typically sexually explicit.

Following NBC's report, Meta suspended the Perky AI app's page, which had run 260 unique ads on the company's platforms since September — those featuring Ortega's image reportedly ran throughout February. Of those ads, NBC says 30 had already been suspended for failing to meet the company's advertising standards, but the ads featuring Ortega were not among them.

In a statement to Mashable, Meta spokesperson Ryan Daniels said, "Meta strictly prohibits child nudity, content that sexualizes children, and services offering AI-generated non-consensual nude images. While this app remains widely available on various app stores, we’ve removed these ads and the accounts behind them."

Perky AI also appears to have been removed from Apple's App Store and Google Play (Mashable checked, and it's not available on either). Apple told NBC the app was taken down on Feb. 16, having already been under investigation by the company for violating its policies around "overtly sexual or pornographic material."

Mashable has reached out to Apple and Google for further comment.

This incident is the latest in a slew of nonconsensual, sexually explicit deepfakes being circulated on the internet. In the first two months of 2024, pictures of celebrities like Taylor Swift and podcast host Bobbi Althoff have spread across major social media platforms, including X, formerly known as Twitter. Deepfakes have also infiltrated schools, with fake nudes of students recently making their way around a Beverly Hills middle school and a high school in suburban Seattle.

The issue is at a critical point, with experts warning that legal and societal change is urgently needed. Sumsub, an identity verification platform, found that deepfake detections increased tenfold between 2022 and 2023. Many social media platforms have struggled to contain this type of content: in the past few months alone, Google, X, and Meta have been called out for allowing deepfake material to circulate on their platforms.

If users see an ad on any platform that they believe should be reported, company guides explain how to do so: Meta allows users to report ads on Facebook or Instagram; Apple hosts community threads explaining how to report in-app ads; and Google provides a form for reporting inappropriate ads.

But some of these steps aren't enough to stop the proliferation of AI-generated content. Big tech needs to take significant action to tackle what seems to be becoming an epidemic, most often targeting girls, women, and marginalized people.

If you have experienced sexual abuse, if you are based in the U.S., call the free, confidential National Sexual Assault hotline at 1-800-656-HOPE (4673), or access the 24-7 help online by visiting online.rainn.org. If you have had intimate images shared without your consent, call the Cyber Civil Rights Initiative’s 24/7 hotline at 844-878-2274 for free, confidential support. The CCRI website also includes helpful information as well as a list of international resources.

If you are based in the UK and have experienced intimate image abuse (aka revenge porn), you can contact the Revenge Porn Helpline on 0345 6000 459. If you have experienced sexual violence and are based in the UK, call the Rape Crisis helpline 0808 802 9999.

Meera Navlakha
Culture Reporter

Meera is a Culture Reporter at Mashable, joining the UK team in 2021. She writes about digital culture, mental health, big tech, entertainment, and more. Her work has also been published in The New York Times, Vice, Vogue India, and others.


