
ZDNET’s key takeaways
- Two major state AI laws are in effect as federal AI regulation remains unclear.
- Trump has renewed attacks on state AI legislation.
- Researchers are still dissatisfied with current laws.
We’re a few weeks into 2026, and the Trump administration has yet to propose AI legislation at the federal level.
At the same time, first-of-their-kind AI safety laws in California and New York, both states well-positioned to influence tech companies, have gone into effect. But a December executive order and its resulting task force have renewed attacks on state AI laws. What do the new laws mean in practice, and can they survive scrutiny at the federal level?
Also: 5 ways rules and regulations can help guide your AI innovation
SB 53 and the RAISE Act
California’s SB 53, the new AI safety law that went into effect on January 1, requires model developers to publicize how they will mitigate the biggest risks posed by AI, and to report on safety incidents involving their models (or face fines of up to $1 million if they don’t). Though not as thorough as previously attempted legislation in the state, the new law is practically the only one in a highly unregulated AI landscape. Most recently, it was joined by the RAISE Act, passed in New York at the end of December, which is similar to the California law.
The RAISE Act, by comparison, also lays out reporting requirements for safety incidents involving models of all sizes, but has an upper fine threshold of $3 million after a company’s first violation. While SB 53 mandates that companies notify the state within 15 days of a safety incident, RAISE requires notification within 72 hours.
Also: The AI balancing act your company can’t afford to fumble in 2026
SB 1047, an earlier version of SB 53, would have required AI labs to safety-test models costing over $100 million and to develop a shutdown mechanism, or kill switch, to control them should they misbehave. That bill failed in the face of arguments that it would stifle job creation and innovation, a common response to regulation efforts, especially from the current administration.
SB 53 uses a lighter hand. Like the RAISE Act, it targets companies with gross annual revenue of more than $500 million, a threshold that exempts many smaller AI startups from the law’s reporting and documentation requirements.
“It’s interesting that there’s this revenue threshold, especially since there has been the introduction of a lot of leaner AI models that can still engage in a lot of processing, but can be deployed by smaller companies,” data security lawyer Lily Li, who founded Metaverse Law, told ZDNET. She noted that Gov. Gavin Newsom vetoed SB 1047, in part, because it would impose growth-inhibiting costs on smaller companies, a concern also echoed by lobbying groups.
Also: Worried AI will take your remote job? You’re safe for now, this study shows
“I do think it’s more politically motivated than necessarily driven by differences in the potential harm or impact of AI based on the size of the company or the size of the model,” she said of the threshold.
Compared to SB 1047, SB 53 focuses more on transparency, documentation, and reporting than on actual harm. The law creates requirements for guardrails around catastrophic risks: cyber, chemical, biological, radiological, and nuclear weapon attacks, physical harm, assault, or situations where developers lose control of an AI system.
Renewed limits on state AI legislation
The current administration and companies arguing against AI safety regulation, especially at the federal level, say it would slow development, harm jobs in the tech sector, and cede ground in the AI race to countries like China.
On Dec. 11, President Trump signed an executive order stating a renewed intention to centralize AI laws at the federal level to ensure US companies are “free to innovate without cumbersome regulation.” The order argues that “excessive State regulation thwarts this imperative” by creating a patchwork of differing laws, some of which it alleges “are increasingly responsible for requiring entities to embed ideological bias within models.”
Also: Senate removes ban on state AI regulations from Trump’s tax bill
It also announced an AI Litigation Task Force to challenge state laws that are inconsistent with a “minimally burdensome national policy framework for AI.” This renews an attempt by Congress this past summer to ban states from passing AI regulations for 10 years, which would have withheld broadband and AI infrastructure funds from states that didn’t comply. The moratorium was defeated in a landslide, temporarily preserving states’ rights to legislate AI in their territory.
Last week, CBS News reported that the Justice Department was forming the task force. An internal memo reviewed by CBS News said the task force will be led by either US Attorney General Pam Bondi or another appointee.
That said, Li doesn’t expect the new task force to significantly impact state regulation, at least in California.
“The AI litigation task force will focus on laws that are unconstitutional under the dormant commerce clause and First Amendment, preempted by federal law, or otherwise unlawful,” she told ZDNET. “The 10th Amendment, however, explicitly reserves rights to the states if there is no federal law, or if there is no preemption of state laws by a federal law.”
Also: Is AI’s war on busywork a creativity killer? What the experts say
Aside from a new request for information (RFI) from the Center for AI Standards and Innovation (CAISI), formerly the AI Safety Institute, regarding the security risks of AI agents, the administration doesn’t appear to have offered a replacement for state laws.
“Federal HIPAA requirements allow for states to pass more stringent state healthcare privacy laws,” Li said. “Here, there is no federal AI law that would preempt many of the state laws, and Congress has rebuffed prior efforts to add federal AI preemption to past legislation.”
More protections – and limits
California’s SB 53 also requires AI companies to protect whistleblowers. This stood out to Li, who noted that, unlike other parts of the law, which are mirrored in the EU AI Act and which many companies are therefore already prepared for, whistleblower protections are unique in tech.
“There really haven’t been a lot of cases in the AI space, obviously, because it’s new,” Li said. “I think that could be a bigger concern for a lot of tech companies, because there is so much turnover in the tech space, and you don’t know what the market’s going to look like. That is something else that companies are worried about as part of the layoff process.”
She added that SB 53’s reporting requirements make companies more concerned about creating material that could be used in class-action lawsuits.
Also: Why you’ll pay more for AI in 2026, and 3 money-saving tips to try
Gideon Futerman, special projects associate at the Center for AI Safety, doesn’t think SB 53 will meaningfully impact safety research.
“This won’t change the day-to-day much, largely because the EU AI Act already requires these disclosures,” he explained. “SB 53 doesn’t impose any new burden.”
Neither law requires that AI labs have their models tested by third parties, though New York’s RAISE Act does mandate annual third-party audits at the time of writing. Still, Futerman considers SB 53 progress.
“It shows that AI safety regulation is possible and has political momentum. The amount of real safety work happening today is still far below what is needed,” he said. “Companies racing to build superintelligent AI while admitting these systems could pose extinction-level risks still don’t really understand how their models work.”
Where this leaves AI safety
“SB 53’s level of regulation is nothing compared to the dangers, but it’s a worthy first step on transparency and the first enforcement around catastrophic risk in the US. This is where we should have been years ago,” Futerman said.
Regardless of state and federal regulations, Li said governance has already become a higher priority for AI companies, driven by their bottom lines. Enterprise customers are pushing liability onto developers, and investors are noting privacy, cybersecurity, and governance in their investment decisions.
Also: 5 ways to grow your business with AI – without sidelining your people
Still, she said that many companies are simply flying under the radar of regulators while they can.
“Transparency alone doesn’t make systems safe, but it’s an important first step,” Futerman said. He hopes future legislation will fill remaining gaps in the national security strategy.
“That includes strengthening export controls and chip tracking, improving intelligence on frontier AI projects abroad, and coordinating with other countries on the military applications of AI to prevent accidental escalation,” he added.