What is the worst-case scenario for AI? California lawmakers want to know.

September 12, 2025


When it comes to AI, as California goes, so goes the nation. The biggest state in the US by population is also the central hub of AI innovation for the entire globe, home to 32 of the world’s top 50 AI companies. That size and influence have given the Golden State the weight to become a regulatory trailblazer, setting the tone for the rest of the country on environmental, labor, and consumer protection regulations — and more recently, AI as well. Now, following the dramatic defeat of a proposed federal moratorium on states regulating AI in July, California policymakers see a limited window of opportunity to set the stage for the rest of the country’s AI laws.

This week, the California State Assembly is set to vote on SB 53, a bill that would require transparency reports from the developers of highly powerful, “frontier” AI models. The models targeted represent the cutting edge of AI: extremely capable generative systems that require massive amounts of data and computing power, like OpenAI’s ChatGPT, Google’s Gemini, xAI’s Grok, and Anthropic’s Claude. The bill has already passed the state Senate; if the Assembly approves it, it goes to the governor to be signed into law or vetoed.

AI can offer tremendous benefits, but, as the bill recognizes, it is not without risks. And while there is no shortage of existing harms, from job displacement to algorithmic bias, SB 53 focuses on possible “catastrophic risks” from AI: AI-enabled biological weapons attacks, or rogue systems carrying out cyberattacks or other criminal activity that could plausibly bring down critical infrastructure. These are widespread disasters that could threaten human civilization at the local, national, or global level, the kind of AI-driven catastrophe that has not yet occurred, rather than already-realized, more personal harms like AI deepfakes.

Exactly what constitutes a catastrophic risk is up for debate, but SB 53 defines it as a “foreseeable and material risk” of an event, one that a frontier model plays a meaningful role in bringing about, causing more than 50 casualties or over $1 billion in damages. How fault would be determined in practice is left to the courts to interpret. It’s hard to write catastrophic risk into law when its definition is far from settled, but doing so can help protect against both near- and long-term consequences.
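
To make that numeric test concrete, here is a minimal sketch of the bill’s casualty-and-damages threshold expressed as a simple check in Python. Everything in it is illustrative: the names (Incident, meets_catastrophic_threshold) are hypothetical, and the statute’s actual language, a “foreseeable and material risk” in which a model plays a “meaningful role,” describes legal judgments that no boolean flag can capture.

```python
# Illustrative sketch only: SB 53's "catastrophic risk" proxy as a simple predicate.
# Hypothetical names; the real test is a legal determination, not a flag.
from dataclasses import dataclass

CASUALTY_THRESHOLD = 50                # more than 50 casualties
DAMAGE_THRESHOLD_USD = 1_000_000_000   # or over $1 billion in damages

@dataclass
class Incident:
    casualties: int
    damages_usd: float
    model_played_meaningful_role: bool  # in practice, decided by the courts

def meets_catastrophic_threshold(incident: Incident) -> bool:
    """True if an incident crosses the bill's numeric proxy for catastrophe."""
    large_scale = (incident.casualties > CASUALTY_THRESHOLD
                   or incident.damages_usd > DAMAGE_THRESHOLD_USD)
    return large_scale and incident.model_played_meaningful_role

# Example: a hypothetical AI-assisted cyberattack causing $2 billion in damages
print(meets_catastrophic_threshold(Incident(0, 2_000_000_000, True)))  # True
```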

By itself, a single state bill focused on increased transparency will probably not be enough to prevent devastating cyberattacks or AI-enabled chemical, biological, radiological, and nuclear weapons attacks. But the bill represents an attempt to regulate this fast-moving technology before it outpaces our capacity for oversight.

SB 53 is the third state-level bill to focus specifically on regulating AI’s catastrophic risks, after California’s SB 1047, which passed the legislature only to be vetoed by the governor, and New York’s Responsible AI Safety and Education (RAISE) Act, which recently passed the New York legislature and is awaiting Gov. Kathy Hochul’s signature.

SB 53, which was introduced by state Sen. Scott Wiener in February, requires frontier AI companies to develop safety frameworks that specifically detail how they approach catastrophic risk reduction. Before deploying their models, companies would have to publish safety and security reports. The bill also gives them 15 days to report “critical safety incidents” to the California Office of Emergency Services, and establishes whistleblower protections for employees who come forward about unsafe model deployments that contribute to catastrophic risk. SB 53 aims to hold companies publicly accountable for their AI safety commitments, with financial penalties of up to $1 million per violation.

In many ways, SB 53 is the spiritual successor to SB 1047, also introduced by Wiener.

Both cover large models trained with more than 10^26 floating-point operations (FLOP) of computing power, a threshold used in a variety of AI legislation as a proxy for significant risk, and both bills strengthen whistleblower protections. Where SB 53 departs from SB 1047 is in its focus on transparency and prevention.

While SB 1047 aimed to hold companies liable for catastrophic harms caused by their AI systems, SB 53 formalizes the sharing of safety frameworks, something many frontier AI companies, including Anthropic, already do voluntarily. It focuses squarely on the heavy hitters, with its rules applying only to companies that generate $500 million or more in gross revenue.
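
As a rough illustration of how narrowly the bill is scoped, the two coverage gates described here, the 10^26 FLOP training-compute threshold and the $500 million revenue threshold, can be sketched as a single check. The function below is a simplified assumption for illustration; the statute’s definitions of “frontier model” and “large developer” carry conditions this sketch does not capture.

```python
# Illustrative sketch of SB 53's coverage gates as described in this article.
# Hypothetical names; the statute's actual definitions are more detailed.
FRONTIER_COMPUTE_FLOP = 10**26             # training compute threshold
LARGE_DEVELOPER_REVENUE_USD = 500_000_000  # gross revenue threshold

def is_covered(training_flop: float, gross_revenue_usd: float) -> bool:
    """Rough proxy: transparency duties attach only to big models from big companies."""
    return (training_flop >= FRONTIER_COMPUTE_FLOP
            and gross_revenue_usd >= LARGE_DEVELOPER_REVENUE_USD)

# A model trained with 3e26 FLOP by a $2 billion-revenue developer would be covered...
print(is_covered(3e26, 2_000_000_000))  # True
# ...while the same model from a $50 million startup would not be.
print(is_covered(3e26, 50_000_000))     # False
```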

“The science of how to make AI safe is rapidly evolving, and it’s currently difficult for policymakers to write prescriptive technical rules for how companies should manage safety,” said Thomas Woodside, the co-founder of Secure AI Project, an advocacy group that aims to reduce extreme risks from AI and is a sponsor of the bill, over email. “This light touch policy prevents backsliding on commitments and encourages a race to the top rather than a race to the bottom.”

Part of the logic of SB 53 is the ability to adapt the framework as AI progresses. The bill authorizes the California Attorney General to change the definition of a large developer after January 1, 2027, in response to AI advances.

Proponents of the bill are optimistic about its chances of being signed by the governor should it pass the legislature, which it is expected to. On the same day that Gov. Gavin Newsom vetoed SB 1047, he commissioned a working group focusing solely on frontier models. The resulting report by the group provided the foundation for SB 53. “I would guess, with roughly 75 percent confidence, that SB 53 will be signed into law by the end of September,” said Dean Ball — former White House AI policy adviser, vocal SB 1047 critic, and SB 53 supporter — to Transformer.

But several industry organizations have rallied in opposition, arguing that additional compliance regulation would be costly and unnecessary, since AI companies are already incentivized to avoid catastrophic harms. OpenAI has lobbied against the bill, and the technology trade group Chamber of Progress argues that it would bury companies in unnecessary paperwork and stifle innovation.

“Those compliance costs are merely the beginning,” Neil Chilson, head of AI policy at the Abundance Institute, told me over email. “The bill, if passed, would feed California regulators truckloads of company information that they will use to design a compliance industrial complex.”

By contrast, Anthropic enthusiastically endorsed the bill in its current state on Monday. “The question isn’t whether we need AI governance – it’s whether we develop it thoughtfully today or reactively tomorrow,” the company explained in a blog post. “SB 53 offers a solid path toward the former.” (Disclosure: Vox Media is one of several publishers that have signed partnership agreements with OpenAI, while Future Perfect is funded in part by the BEMC Foundation, whose major funder was also an early investor in Anthropic. Neither organization has editorial input into our content.)

The debate over SB 53 ties into broader disagreements about whether states or the federal government should drive AI safety regulation. But since the vast majority of these companies are based in California, and nearly all do business there, the state’s legislation matters for the entire country.

“A federally led transparency approach is far, far, far preferable to the multi-state alternative,” where a patchwork of state regulations can conflict with each other, said Cato Institute technology policy fellow Matthew Mittelsteadt in an email. But “I love that the bill has a provision that would allow companies to defer to a future alternative federal standard.”

“The natural question is whether a federal approach can even happen,” Mittelsteadt continued. “In my opinion, the jury is out on that, but the possibility is far more likely than some suggest. It’s been less than 3 years since ChatGPT was released. That is hardly a lifetime in public policy.”

But in a time of federal gridlock, frontier AI advancements won’t wait for Washington.

The catastrophic risk divide

The bill’s focus on, and framing of, catastrophic risks is not without controversy.

The idea of catastrophic risk comes from the fields of philosophy and quantitative risk assessment. Catastrophic risks are downstream of existential risks, which threaten humanity’s actual survival or else permanently reduce our potential as a species. The hope is that if these doomsday scenarios are identified and prepared for, they can be prevented or at least mitigated.

But if existential risks are clear — the end of the world, or at least the world as we know it — what falls under the catastrophic risk umbrella, and how best to prioritize those risks, depends on who you ask. There are longtermists, people focused primarily on humanity’s far future, who place a premium on things like multiplanetary expansion for human survival; they are often chiefly concerned with risks from rogue AI or extremely lethal pandemics. Neartermists are more preoccupied with existing risks, like climate change, mosquito-borne disease, or algorithmic bias. The camps blend into one another — neartermists would also like to avoid an asteroid strike that could wipe out a city, and longtermists don’t dismiss risks like climate change — so it’s best to think of them as two ends of a spectrum rather than a strict binary.

You can think of the AI ethics and AI safety frameworks as the near- and longtermism of AI risk, respectively. AI ethics is about the moral implications of how the technology is deployed in the present, including things like algorithmic bias and human rights. AI safety focuses on catastrophic risks and potential existential threats. But, as Vox’s Julia Longoria reported in the Good Robot series for Unexplainable, interpersonal conflicts have led these two factions to work against each other, and much of the disagreement comes down to emphasis. (AI ethics people argue that catastrophic risk concerns over-hype AI’s capabilities and ignore its impact on vulnerable people right now, while AI safety people worry that if we focus too much on the present, we won’t be able to mitigate larger-scale problems down the line.)

But behind the question of near versus long-term risks lies another one: what, exactly, constitutes a catastrophic risk?

SB 53 initially set the standard for catastrophic risk at 100 rather than 50 casualties — similar to New York’s RAISE Act — before halving the threshold in an amendment to the bill. While the average person might consider, say, many people driven to suicide after interacting with AI chatbots to be catastrophic, such a risk is outside of the bill’s scope. (The California State Assembly just passed a separate bill to regulate AI companion chatbots by preventing them from participating in discussions about suicidal ideation or sexually explicit material.)

SB 53 focuses squarely on harms from “expert-level” frontier AI model assistance in developing or deploying chemical, biological, radiological, and nuclear weapons; committing crimes like cyberattacks or fraud; and “loss of control” scenarios where AIs go rogue, behaving deceptively to avoid being shut down and replicating themselves without human oversight. For example, an AI model could be used to guide the creation of a new deadly virus that infects millions and kneecaps the global economy.

“The 50 to 100 deaths or a billion dollars in property damage is just a proxy to capture really widespread and substantial impact,” said Scott Singer, lead author of the California Report for Frontier AI Policy, which helped form the basis of the bill. “We do look at like AI-enabled or AI potentially [caused] or correlated suicide. I think that’s like a very serious set of issues that demands policymaker attention, but I don’t think it’s the core of what this bill is trying to address.”

Transparency is helpful in preventing such catastrophes because it can raise the alarm before things get out of hand, allowing AI developers to correct course. And in the event that such efforts fail to prevent a mass casualty incident, enhanced safety transparency can help law enforcement and the courts figure out what went wrong. The challenge is that it can be difficult to determine how responsible a model is for a specific outcome, Irene Solaiman, the chief policy officer at Hugging Face, a collaboration platform for AI developers, told me over email.

“These risks are coming and we should be ready for them and have transparency into what the companies are doing,” said Adam Billen, the vice president of public policy at Encode, an organization that advocates for responsible AI leadership and safety. (Encode is another sponsor of SB 53.) “But we don’t know exactly what we’re going to need to do once the risks themselves appear. But right now, when those things aren’t happening at a large scale, it makes sense to be sort of focused on transparency.”

However, a transparency-focused bill like SB 53 is insufficient for addressing already-existing harms. When we already know something is a problem, the focus should be on mitigating it.

“Maybe four years ago, if we had passed some sort of transparency legislation like SB 53 but focused on those harms, we might have had some warning signs and been able to intervene before the widespread harms to kids started happening,” Billen said. “We’re trying to kind of correct that mistake on these problems and get some sort of forward-facing information about what’s happening before things get crazy, basically.”

SB 53 risks being both overly narrow and unclearly scoped. We have not yet faced these catastrophic harms from frontier AI models, and the most devastating risks might take us entirely by surprise. We don’t know what we don’t know.

It’s also certainly possible that models trained with less than 10^26 FLOP, which aren’t covered by SB 53, could cause catastrophic harm under the bill’s definition. The EU AI Act sets its threshold for “systemic risk” at a lower 10^25 FLOP, and there’s disagreement about the usefulness of computational power as a regulatory standard at all, especially as models become more efficient.

As it stands right now, SB 53 occupies a different niche from bills focused on regulating AI use in mental healthcare or data privacy, reflecting its authors’ desire not to step on the toes of other legislation or bite off more than it can reasonably chew. But Chilson, the Abundance Institute’s head of AI policy, is part of a camp that sees SB 53’s focus on catastrophic harm as a “distraction” from the real near-term benefits and concerns, like AI’s potential to accelerate the pace of scientific research or create nonconsensual deepfake imagery, respectively.

That said, deepfakes could certainly cause catastrophic harm. For instance, imagine a hyper-realistic deepfake impersonating a bank employee to commit fraud at a multibillion-dollar scale, said Nathan Calvin, the vice president of state affairs and general counsel at Encode. “I do think some of the lines between these things in practice can be a bit blurry, and I think in some ways…that is not necessarily a bad thing,” he told me.

It could be that the ideological debate around what qualifies as a catastrophic risk, and whether such risks are worthy of our legislative attention, is just noise. The bill is intended to regulate AI before the proverbial horse is out of the barn. The average person isn’t going to worry about the likelihood of AI sparking nuclear warfare or biological weapons attacks, but they do think about how algorithmic bias might affect their lives in the present. But in trying to prevent the worst-case scenarios, perhaps we can also avoid the “smaller,” nearer harms. If they’re effective, forward-facing safety provisions designed to prevent mass casualty events will also make AI safer for individuals.

If SB 53 passes the legislature and gets signed by Gov. Newsom into law, it could inspire other state attempts at AI regulation through a similar framework, and eventually encourage federal AI safety legislation to move forward.

How we think about risk matters because it determines where we focus our efforts on prevention. I’m a firm believer in the value of defining your terms, in law and debate. If we’re not on the same page about what we mean when we talk about risk, we can’t have a real conversation.
