Despite claiming to support AI safety, powerful tech interests are trying to kill SB1047.
Artificial intelligence could help us solve humanity’s greatest challenges. But, left unchecked, it could cause catastrophic harm. Well-designed regulation will allow us to harness AI’s potential while protecting us from its capacity for harm: not through bureaucrats specifying technical procedures, but through rules that ensure that companies adopt and follow safe practices.
California is on the brink of passing regulation to start doing just that. Yet, despite universal recognition among leading AI executives of the risks their work poses, the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (SB1047) has become the target of an extraordinary lobbying effort. Opponents insist the bill is certain to stifle technical innovation in Silicon Valley and is all but purposely designed to end “open-source” AI development.
Malarkey. This fight is less about “corporate capture” or the California legislature’s desire to kill its golden-egg-laying goose than it is about the same-as-it-ever-was power of money in American politics. If the bill fails—and next month will determine whether it does—it will signal yet again the loss of America’s capacity to address even the most significant threats.
Who’s Afraid of AI Safety?
At its core, SB1047 does one small but incredibly important thing: It requires that those developing the most advanced AI models adopt and follow safety protocols—including shutdown protocols—to reduce any risk that their models are stolen or deployed in a way that causes “critical harm.”
Which models? Simplifying a bit: initially, models that cost $100,000,000 or more to train, or models fine-tuned at a cost of $10,000,000 or more.
“Critical harm”? The law covers models that lead to the “creation or use of a chemical, biological, radiological, or nuclear weapon in a manner that results in mass casualties” or that lead to cyberattacks on critical infrastructure costing more than $500,000,000 or that, acting with limited human oversight, result in mass casualties or damage greater than $500,000,000.
Thus, the law covers an incredibly small number of model builders in order to avoid potentially huge harm. And the mandate it deploys to avoid “critical harm” simply requires that companies adopt robust protocols to increase model safety. The law does not specify what those protocols must be. It simply requires each company, considering the state of the industry and the guidance of entities advising it, to adopt rules to ensure that this small slice of its products is safe.
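To make the bill’s scope concrete, here is a minimal, purely illustrative sketch in Python of the thresholds described above. The constant and function names are my own invention, the dollar figures come from the summary above, and nothing here tracks the statutory language precisely; it is a reader’s simplification, not a legal test.

```python
# Purely illustrative sketch of SB1047's thresholds as described above.
# Constant and function names are invented; this is a simplification,
# not the statutory language.

TRAINING_COST_THRESHOLD_USD = 100_000_000   # $100M+ to train -> "covered model"
FINE_TUNE_COST_THRESHOLD_USD = 10_000_000   # $10M+ to fine-tune -> covered derivative
CRITICAL_HARM_DAMAGES_USD = 500_000_000     # $500M+ in damages figures in "critical harm"


def is_covered(training_cost_usd: float, fine_tune_cost_usd: float = 0.0) -> bool:
    """Rough check of whether a model falls within the bill's scope."""
    return (training_cost_usd >= TRAINING_COST_THRESHOLD_USD
            or fine_tune_cost_usd >= FINE_TUNE_COST_THRESHOLD_USD)


def is_critical_harm(damages_usd: float, mass_casualties: bool = False) -> bool:
    """Rough check of the 'critical harm' trigger: mass casualties or
    damages of $500M or more (again, a simplification of the text)."""
    return mass_casualties or damages_usd >= CRITICAL_HARM_DAMAGES_USD


# A $150M training run is covered; a $2M fine-tune of someone else's model is not.
assert is_covered(training_cost_usd=150_000_000)
assert not is_covered(training_cost_usd=0, fine_tune_cost_usd=2_000_000)
```

On this reading, a covered developer’s obligation is simply to adopt and follow safety and shutdown protocols aimed at keeping such harms from occurring.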
In some sense, every company developing models of this size would say it has already adopted such safety protocols. So then, why the opposition?
The problem for tech companies is that the law builds in mechanisms to ensure that the protocols are sufficiently robust and actually enforced. The law would eventually require outside auditors to review the protocols, and from the start, it would protect whistleblowers within firms who come forward to show that protocols are not being followed. The law thus makes real what the companies say they are already doing.
But if they’re already creating these safety protocols, why do we need a law to mandate them? First, because, as some within the industry assert directly, existing guidelines are often inadequate; and second, because, as whistleblowers have already revealed, some companies are not following the protocols they have adopted. Opposition to SB1047 is thus designed to keep safety optional: something the companies can promise but have no effective obligation to deliver.
That companies would want to avoid regulation is not surprising. What is surprising is how awful the arguments against the bill have been—especially by people who should know better.
To start with, members of Congress have written to the bill’s sponsor, State Senator Scott Wiener, telling him, “The bill requires firms to adhere to voluntary guidance issued by industry and the National Institute of Standards and Technology, which does not yet exist.” That is simply not true. The bill merely requires developers to “consider industry best practices and applicable guidance” from organizations like NIST, guidance that NIST has already begun to supply.
These representatives go on to object that the bill “is skewed toward addressing extreme misuse scenarios and hypothetical existential risks while largely ignoring demonstrable AI risks like misinformation, discrimination, nonconsensual deepfakes, environmental impacts, and workforce displacement.” It is true, of course, that the bill does not address lots of other AI risks. California—and Congress!—ought to address those risks, too. Indeed, California alone introduced over 50 AI bills this year, many of them targeted at exactly those harms. But it is not clear how that is an argument against addressing the risks this bill does target. It’s like saying a bill addressing wildfire risk should be rejected because it does not address flooding.
But consider the term “hypothetical existential risks”: Many in the field of AI have spoken of the “existential risks” that advanced AI may present—“existential” in the sense that if they are realized, humanity is over. Those risks are not SB1047’s direct concern. Its focus is on more practical harms, such as cyberattacks on critical infrastructure or economic harm of $500M or more. Every AI company that is likely to train the models that this bill would regulate—including companies such as OpenAI, Google, and Meta that oppose it—believes, or says it believes, that its most powerful AI models might pose these sorts of risks in the not-too-distant future.
Regardless, how should we think about these severe risks more generally? Some believe such risks are unavoidable. They reject the term “hypothetical.” Some believe such risks don’t exist: Like time travel, they can be imagined, but they cannot be realized. Yet most speak of these risks in probabilities, for example, “a 10 percent chance in 10 years.”
It’s not clear which of these three possibilities these members of Congress mean. They write, “There is little scientific evidence of harm of ‘mass casualties or harmful weapons created’ from advanced models.”
However, no one is claiming that we have seen “mass casualties or harmful weapons created” so far. The point of the bill is to avoid such harm, especially as models become so enormously powerful. (There is also little scientific evidence of harm of mass casualties or harmful weapons created from bioengineering; is that a reason not to regulate bioengineering?) Sure, if you’re certain such risks could not be realized then there’s no reason for this bill. But when did members of Congress become experts in AI?
The representatives then continue by echoing shibboleths about open-source AI. “Currently, some advanced models are released as open source and made widely available,” they write. “This openness allows smaller, lesser-resourced companies and organizations, including universities, to develop on top of them, stimulating innovation and having large economic impact.”
That’s true enough. But then, as the members’ letter continues, truth begins to fade. They say, “This bill would reduce this practice [of open-source development] by holding the original developer of a model liable for a party misusing their technology downstream.”
Wrong. The bill creates no liability simply because someone “misus[ed]” a “technology downstream.” No doubt, it imposes upon “developers” of “covered models” the obligation to take “reasonable care to implement…appropriate measures to prevent covered models and covered model derivatives from posing unreasonable risks of causing or materially enabling critical harms”—again, “mass casualties” or economic loss exceeding $500M. Who is against that? What industry in America today, without explicit legislative exemption, is entitled to deploy, without regulation or liability, a product that creates “unreasonable risks of causing…critical harms”?
Open-source software developers have long used licenses to avoid economic liability for harms that flow from their software. For ordinary economic harm, that may well make sense. But the law of tort—independent of, but codified in important ways by, SB1047—is not bound by software licenses. In the face of “critical harms,” let alone existential risks, there is no good reason to exempt developers of software from the ordinary duty that everyone else bears: to take “reasonable care” to implement “appropriate measures” to avoid “unreasonable risk.”
What’s more, targeting the shutdown protocols that the law requires companies to develop, the representatives write that “kill switches” “would decimate the ecosystems that spring up around [open-source] AI models.” The worry, apparently, is that no entrepreneur would want to build a product around an AI system if the developer could pull the plug at any time.
Again, this is wrong. First, the law does not require anyone to build a “kill switch.” It merely requires developers to have the ability to shut down their own software. Second, the law does not require anyone to trigger that shutdown at any particular time. It only requires that companies develop the capability and describe the protocols governing when it would be used. The rule is like a regulation requiring companies building electrical grids to include circuit breakers in their design. Do circuit breakers “decimate the ecosystems” of companies developing electrical products?
Third, the law requires “full shutdown” capability for models “controlled by” a developer. Once code is adopted and deployed by others, so long as those others are not “developers” of “covered models,” the obligation does not reach them. But fourth, and most bizarrely, imagine an open-source model did have a kill switch, and imagine the developer flipped it because a runaway model began to cause “critical harm”—again, “mass casualties” or economic harm of $500M or more. Are these members arguing that the developer should not flip the switch? Or that the entrepreneur using the model would rather cause “critical harm” than have its model stopped?
Indeed, the argument runs the other way. A company that builds its product on top of an unreasonably dangerous product could itself face tort liability. Circuit breakers built into the underlying system thus make it more likely, not less, that companies will develop products based on open-source technologies, precisely because the capability to stop runaway critical harm makes the underlying product more valuable to follow-on developers.
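To illustrate the limited reach of the shutdown requirement described above, here is a small, hypothetical Python sketch. The class and method names are invented; the only point it encodes, drawn from the bill’s terms as quoted above, is that the “full shutdown” capability reaches models the developer itself controls, not copies others run downstream.

```python
# Hypothetical sketch of a "full shutdown" capability, as described above.
# Names are invented; nothing here is mandated wording from SB1047.

from dataclasses import dataclass, field


@dataclass
class ModelDeployment:
    name: str
    controlled_by_developer: bool   # the shutdown duty reaches only these
    running: bool = True


@dataclass
class ShutdownProtocol:
    """The developer's own capability to halt the covered models it controls."""
    deployments: list[ModelDeployment] = field(default_factory=list)

    def full_shutdown(self) -> list[str]:
        """Stop every deployment the developer controls; copies run by
        downstream parties on their own infrastructure are untouched."""
        stopped = []
        for d in self.deployments:
            if d.controlled_by_developer and d.running:
                d.running = False
                stopped.append(d.name)
        return stopped


protocol = ShutdownProtocol([
    ModelDeployment("hosted-api", controlled_by_developer=True),
    ModelDeployment("downstream-fork", controlled_by_developer=False),
])
print(protocol.full_shutdown())   # ['hosted-api'] -- the downstream fork keeps running
```

The point, as above, is structural: the capability is a circuit breaker on the developer’s own systems, not a remote kill switch over everyone else’s products.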
Finally, the members of Congress write that a recent NIST report recommended that the “government should not restrict access to open-source models with widely available model weights at this time.” True, it shouldn’t, but nothing in SB1047 would. The bill does nothing to “restrict access” to models; it only requires that “developers” of “covered models” (again, those spending $100M or more) or of covered “fine-tuned” models (again, those spending $10M or more) develop protocols to advance the safety of those models, at least to the extent reasonable given the state of knowledge in the field.
A Simple First Step
SB1047 is a protocol bill. It mandates that a handful of companies take meaningful steps to adopt procedures guarding against harms that the leaders of every one of these companies agree such models could conceivably cause.
The bill isn’t perfect. There are plenty of ways in which it could be improved. I agree with much in Anthropic’s balanced and insightful analysis of the bill—an analysis by one of the companies that would be regulated, which nonetheless concludes that the bill’s benefits outweigh its costs.
But every bill is imperfect. And this one has one more important argument in its favor: If Donald Trump is elected, he has promised to immediately remove even the minimal protections that the Biden administration has imposed. That would be catastrophic. SB1047 is not a substitute for those protections, but it is a backstop and a critical first step.