On April 28, the House of Representatives passed the first major law tackling AI-induced harm: the Take It Down Act. The bipartisan bill, which also passed the Senate and which President Trump is expected to sign, criminalizes non-consensual deepfake porn and requires platforms to take down such material within 48 hours of being served notice. The bill aims to stop the scourge of AI-created illicit imagery that has exploded in the last few years along with the rapid improvement of AI tools.
While some civil society groups have raised concerns about the bill, it has received wide support from leaders on both sides of the aisle, from the conservative think tank American Principles Project to the progressive nonprofit Public Citizen. To some advocates, the bill is a textbook example of how Congress should work: of lawmakers fielding concerns from impacted constituents, then coming together in an attempt to reduce further harm.
“This victory belongs first and foremost to the heroic survivors who shared their stories and the advocates who never gave up,” Senator Ted Cruz, who spearheaded the bill in the Senate, wrote in a statement to TIME. “By requiring social media companies to take down this abusive content quickly, we are sparing victims from repeated trauma and holding predators accountable.”
Here’s what the bill aims to achieve, and how it crossed many hurdles en route to becoming law.
Victimized teens
The Take It Down Act was born out of the suffering—and then activism—of a handful of teenagers. In October 2023, 14-year-old Elliston Berry of Texas and 15-year-old Francesca Mani of New Jersey each learned that classmates had used AI software to fabricate nude images of them and other female classmates.
The tools that had been used to humiliate them were relatively new: products of the generative AI boom in which virtually any image could be created with the click of a button. Pornographic and sometimes violent deepfake images of Taylor Swift and others soon spread across the internet.
When Berry and Mani each sought to remove the images and seek punishment for those who had created them, they found that both social media platforms and their school boards reacted with silence or indifference. “They just didn’t know what to do: they were like, this is all new territory,” says Berry’s mother, Anna Berry.
Anna Berry then reached out to Senator Ted Cruz’s office, which took up the cause and drafted legislation that became the Take It Down Act. Cruz, who has two teenage daughters, threw his political muscle behind the bill, including organizing a Senate field hearing with testimony from both Elliston Berry and Mani in Texas. Mani, who had spoken out about her experiences in New Jersey before connecting with Cruz’s office during its national push for legislation, says that Cruz spoke with her several times directly—and personally put in a call to a Snapchat executive asking them to remove her deepfakes from the platform.
Mani and Berry both spent hours talking with congressional offices and news outlets to spread awareness. Bipartisan support soon spread, including the sign-on of Democratic co-sponsors like Amy Klobuchar and Richard Blumenthal. Representatives Maria Salazar and Madeleine Dean led the House version of the bill.
Read More: Time 100 AI 2024: Francesca Mani
Political wrangling
Very few lawmakers disagreed with implementing protections around AI-created deepfake nudes. But translating that into law proved much harder, especially in a divided, contentious Congress. In December, lawmakers tried to slip the Take It Down Act into a bipartisan spending deal. But the larger deal was killed after Elon Musk and Donald Trump urged lawmakers to reject it.
In the Biden era, the piece of deepfake legislation that seemed to stand the best chance of passing was the DEFIANCE Act, led by Democrats Dick Durbin and Alexandria Ocasio-Cortez. In January, however, Cruz became chair of the Senate Commerce Committee, giving him a major position of power to set agendas. His office rallied support for Take It Down from a slew of public interest groups. It also helped persuade tech companies to support the bill, which worked: Snapchat and Meta got behind it.
“Cruz put an unbelievable amount of muscle into this bill,” says Sunny Gandhi, vice president of political affairs at Encode, an AI-focused advocacy group that supported the bill. “They spent a lot of effort wrangling a lot of the companies to make sure that they wouldn’t be opposed, and getting leadership interested.”
Gandhi says that one of the key reasons tech companies supported the bill was that it did not involve Section 230 of the Communications Act, an endlessly debated law that protects platforms from civil liability for what is posted on them. The Take It Down Act instead draws its enforcement power from the Federal Trade Commission’s mandate over “deceptive and unfair trade practices.”
“With anything involving Section 230, there’s a worry on the tech company side that you are slowly going to chip away at their protections,” Gandhi says. “Going through the FTC instead was a very novel approach that I think a lot of companies were okay with.”
The Senate version of the Take It Down Act passed unanimously in February. A few weeks later, Melania Trump threw her weight behind the bill, staging a press conference in D.C. with Berry, Mani, and other deepfake victims—her first solo public appearance since she resumed the role of First Lady. The campaign fit with her main initiative from the first Trump administration, “Be Best,” which included a focus on online safety.
A Cruz spokesperson told TIME that Melania Trump’s support was crucial to expediting the bill in the House. “The biggest challenge with a lot of these bills is trying to secure priority and floor time,” they said. “It’s essential to have a push to focus priorities—and it happened quickly because of her.”
Support is broad, but concerns persist
While the bill passed both chambers easily and with bipartisan support, it weathered plenty of criticism on the way. Critics say that the bill is sloppily written, and that bad faith actors could flag almost anything as nonconsensual illicit imagery in order to get it scrubbed from the internet. They also say that Donald Trump could use it as a weapon, leaning on his power over the FTC to threaten critics. In February, 12 organizations including the Center for Democracy & Technology penned a letter to the Senate warning that the bill could lead to the “suppression of lawful speech.”
Critics question the bill’s effectiveness especially because it puts the FTC in charge of enforcement—and the federal agency has been severely weakened by the Trump administration. At a House markup in April, Democrats warned that a weakened FTC could struggle to keep up with take-down requests, rendering the bill toothless.
Regardless, Gandhi hopes that Congress will build upon Take It Down to create more safeguards for children online. The House Energy and Commerce Committee recently held a hearing on the subject, signaling increased interest. “There’s a giant movement in Congress and at the state level around kids’ safety that is only picking up momentum,” Gandhi says. “People don’t want this to be the next big harm that we wait five or 10 years before we do something about it.”
For Mani and Berry, the passage of Take It Down represents a major political, legal, and emotional victory. “For those of us who’ve been hurt, it’s a chance to take back our dignity,” Mani says.