One of the provisions in the “big, beautiful” budget bill would ban states from regulating AI for the next 10 years.
The bill, which narrowly passed the House by a single vote (215–214), includes a clause stating: “No state or political subdivision may enforce any law or regulation regulating artificial intelligence models, artificial intelligence systems, or automated decision systems during the 10-year period beginning on the date of the enactment of this Act.” Because that period begins on the date of enactment, the provision would take effect immediately upon being signed.
The entire budget bill is now being considered by the Senate, which could eliminate or modify this section.
I’m a big fan of generative AI, but I also recognize that any powerful technology comes with risks and unintended consequences. Just as we have laws to regulate airlines, vehicles, and food and drugs, we also need thoughtful oversight of AI.
Skepticism about state internet laws
As someone who has closely followed internet regulation since the early 1990s, I’ve often been critical of state-level legislation, not just because of what some of these bills attempt to do but because they risk creating a patchwork of conflicting laws that are difficult for companies to navigate. It’s one thing to regulate activity that occurs entirely within a state’s borders, but quite another to try to govern a “product” that inherently transcends both state and national boundaries.
Although I prefer thoughtful federal legislation to state-level internet controls, I recognize that the federal government is often very slow in enacting consumer protection laws. I love our system of government, but even under normal circumstances, it’s not easy to get consensus in a country as large and diverse as ours, and it’s especially difficult in today’s highly polarized political climate.
In an ideal world, the federal government would take the lead in regulating AI. But given the current Congress and White House, that’s unlikely to happen anytime soon. In the meantime, it’s often state and local governments that fill the gap in protecting consumers.
Could ban a California medical disclosure law
If the Senate passes and the president signs the bill with this provision, it will not only curtail future legislation but prevent states from enforcing laws that are already on the books. For example, last year both houses of California’s legislature unanimously passed the “Health care services: artificial intelligence act” (AB 3030), which requires health care providers to “include both a disclaimer that indicates to the patient that a communication was generated by generative artificial intelligence” and “clear instructions describing how a patient may contact a human health care provider, employee, or other appropriate person.”
I love that my health care provider uses audio recording and AI to generate detailed reports after each visit with my primary care physician. But the first time I saw one on my patient portal, I was puzzled by how comprehensive it was — and amazed that my doctor could recall everything we had discussed. Only after doing a bit of research did I learn that the report was generated by AI using Microsoft’s DAX Copilot ambient-listening technology. Patients shouldn’t have to be internet sleuths to get such a basic disclosure, but the budget bill could render the requirement unenforceable.
Tennessee could be “All Shook Up” over the provision
There are plenty of other state AI regulation laws already on the books or under consideration across the country, including the ELVIS Act (Ensuring Likeness Voice and Image Security Act), which was signed into law by Tennessee Gov. Bill Lee last March after unanimous passage by the state’s overwhelmingly Republican legislature. If the U.S. Senate passes the budget bill with this provision, ELVIS will have “left the building.”
Sen. Marsha Blackburn (R-TN) has expressed opposition to the AI clause in the budget bill. “We certainly know that in Tennessee we need those protections,” she said during a hearing, “and until we pass something that is federally preemptive, we can’t call for a moratorium.”
‘Take it Down’ law is a positive step
Once in a while we do see helpful federal internet laws passed with overwhelming bipartisan support. A recent example is the Take It Down Act, which passed the Senate unanimously and the House with a 409–2 vote and was signed by President Trump on May 19.
The law makes it a federal offense to knowingly share or threaten to share intimate images without the subject’s consent, covering both real and AI-generated content, which strikes me as a good example of common-sense legislation.
However, the bill wasn’t entirely without controversy. The Electronic Frontier Foundation, for example, fears that it “pressures platforms to actively monitor speech, including speech that is presently encrypted,” and “thus presents a huge threat to security and privacy online.” The law doesn’t explicitly mention encryption, but it does require platforms to take “reasonable steps” to prevent the reappearance of material that has been taken down.
Congressman Thomas Massie (R-KY) was one of only two no votes, posting on X that it’s “a slippery slope, ripe for abuse, with unintended consequences.”
I, too, worry about abuse and unintended consequences but, on balance, agree that this legislation is needed to protect minors and adults from a form of sexual exploitation, and I would argue that bypassing encryption is not a “reasonable step.”
It’s in the public interest for both federal and state legislators to find ways to protect people from potential abuse and unintended consequences of today’s technologies, and it is generally best to do so on a federal level in coordination with other countries when it comes to regulating global platforms. And, although I understand the value of federal laws preempting state laws and setting a “floor” for basic protections, I don’t want to see states prevented from passing constitutionally valid laws to protect their own citizens.
And speaking of AI disclosure, I used ChatGPT to help find sources for this article, but I verified all the facts and did my own writing.
Larry Magid is a tech journalist and internet safety activist. Contact him at [email protected].