New Senate bill would provide path for victims of nonconsensual AI deepfakes

The legislation would criminalize the spread of sexualized AI-generated images.
By Chase DiBenedetto
A woman looks off to the side. Dozens of transparent internet browser windows hover over her face.
Credit: Bob Al-Greene / Mashable

Following the viral spread of pornographic AI images of singer Taylor Swift, government leaders are addressing the creation and sharing of sexualized AI-generated deepfakes.

On Jan. 30, a bipartisan group of Senators introduced a new bill that would criminalize the act of spreading nonconsensual and sexualized “digital forgeries” created using artificial intelligence. Digital forgeries are defined as "a visual depiction created through the use of software, machine learning, artificial intelligence, or any other computer-generated or technological means to falsely appear to be authentic."

Currently known as the Disrupt Explicit Forged Images and Non-Consensual Edits Act of 2024 (or the "Defiance Act"), the legislation would also provide a path to civil recourse for victims depicted in nude or sexually explicit digital forgeries. Through the bill, victims could sue "individuals who produced or possessed the forgery with intent to distribute it" or anyone who received the material knowing it was not made with consent, the Guardian reported.

"The volume of deepfake content available online is increasing exponentially as the technology used to create it has become more accessible to the public," wrote judiciary chair Richard J. Durbin and Rep. Lindsey Graham. "The overwhelming majority of this material is sexually explicit and is produced without the consent of the person depicted. A 2019 study found that 96 percent of deepfake videos were nonconsensual pornography."

Senate majority whip Dick Durbin explained in a press release that the bill's quick introduction was spurred directly by the viral images of Swift and the White House's demand for accountability. “This month, fake, sexually explicit images of Taylor Swift that were generated by artificial intelligence swept across social media platforms. Although the imagery may be fake, the harm to the victims from the distribution of sexually explicit 'deepfakes' is very real."

In a statement from White House press secretary Karine Jean-Pierre on Jan. 26, the Biden administration expressed its desire for Congress to address deepfake proliferation amid weak enforcement from social media platforms. “We know that lax enforcement disproportionately impacts women and also girls, sadly, who are the overwhelming targets of online harassment and also abuse," Jean-Pierre told reporters. "Congress should take legislative action."

The creation of deepfake "porn" has been criminalized in other countries and some U.S. states, although widespread adoption of such laws has yet to be seen. Mashable's Meera Navlakha has reported on a worsening social media landscape that has disregarded advocates' ongoing demands for protection and accountability, writing, "The alarming reality is that AI-generated images are becoming more pervasive, and presenting new dangers to those they depict. Exacerbating this issue is murky legal ground, social media platforms that have failed to foster effective safeguards, and the ongoing rise of artificial intelligence."

A climate of worsening media literacy — and the steep rise of digital misinformation and deepfake scams — prompts an even greater need for industry action.

If you have had intimate images shared without your consent, call the Cyber Civil Rights Initiative’s 24/7 hotline at 844-878-2274 for free, confidential support. The CCRI website also includes helpful information as well as a list of international resources.

Chase DiBenedetto
Social Good Reporter

Chase joined Mashable's Social Good team in 2020, covering online stories about digital activism, climate justice, accessibility, and media representation. Her work also touches on how these conversations manifest in politics, popular culture, and fandom. Sometimes she's very funny.

