WASHINGTON: U.S. President Donald Trump has signed a landmark bill into law that criminalizes the non-consensual sharing of real or AI-generated intimate images, marking a significant federal step against the misuse of deepfake technology.
The “Take It Down Act,” passed with strong bipartisan support in Congress, targets the growing use of artificial intelligence to create and distribute sexually explicit content without a person’s consent. Individuals found guilty of intentionally sharing such material could face up to three years in prison.
“With the rise of AI image generation, countless women have been harassed with deepfakes and other explicit images distributed against their will,” President Trump said during the Rose Garden signing ceremony.
“Today, we’re making it totally illegal. Anyone who intentionally distributes explicit images without the subject’s consent will face real consequences.”
A Victory for Online Safety
The law not only criminalizes the non-consensual publication of explicit material, whether real or synthetically generated, but also requires platforms to establish clear procedures and remove such content within 48 hours of a valid report from the victim.
First Lady Melania Trump, who publicly endorsed the bill in March, attended the signing in a rare public appearance at the White House. She hailed the legislation as a major win for families.
“This is a national victory that will help parents and families protect children from online exploitation,” she said.
“It’s a powerful step forward in ensuring that every American, especially young people, can feel protected from having their image or identity abused.”
Deepfakes and Digital Harassment on the Rise
Deepfakes — digitally altered images or videos often created using AI — have become increasingly accessible through apps and online tools, enabling the creation of hyper-realistic but entirely fabricated content. These tools have been widely misused, particularly to target women.
While celebrities and public figures like Taylor Swift have been victims, experts warn that ordinary individuals, including teenagers, are now being targeted in schools and communities, often by classmates or online trolls.
The resulting content has fueled harassment, bullying, and blackmail, and has inflicted severe mental health consequences on victims.
Legal Landscape and Concerns
Several U.S. states, including California and Florida, already have laws targeting sexually explicit deepfakes. The Take It Down Act now establishes a federal framework that obligates platforms to act swiftly once notified by victims.
However, the law is not without its critics. The Electronic Frontier Foundation (EFF) has voiced concerns that the legislation could be misused to suppress free speech.
“This gives the powerful a dangerous new route to manipulate platforms into removing lawful speech they simply don’t like,” the EFF warned in a statement.
Experts Weigh In
Renee Cummings, an AI ethicist and criminologist at the University of Virginia, described the new law as an important step but emphasized the need for strong enforcement.
“Its effectiveness will depend on swift and sure enforcement, severe punishment for perpetrators, and real-time adaptability to emerging digital threats,” Cummings told AFP.
As AI technology continues to evolve rapidly, lawmakers face increasing pressure to stay ahead of its potential for abuse. The Take It Down Act represents a significant attempt to confront the challenges of digital exploitation in the AI era.