“Does SB 1047…spell the end of the Californian technology industry?” Yann LeCun, the chief AI scientist at Meta and one of the so-called “godfathers” of the artificial intelligence boom, asked in June.
LeCun was echoing the panicked reaction of many in the tech community to SB 1047, a bill currently making its way through the California State Legislature. The legislation would create one of the country’s first regulatory regimes specifically designed for AI. SB 1047 passed the state Senate nearly unopposed and is currently awaiting a vote in the state Assembly. But it faces a barrage of attacks from some of Silicon Valley’s most influential players, who have framed it as nothing less than a death knell for the future of technological innovation.
Many of the same industry figures now attacking the bill have publicly insisted that they welcome AI regulation. But now they and their industry groups are saying it's too soon to regulate. Or they want regulation, of course, just not this regulation.
None of the major AI companies support SB 1047. Some, like Google and Meta, have taken unusually strong positions against it. Others are more circumspect, letting trade associations speak for them or requesting that the bill be watered down further. With such an array of powerful forces stacked against it, it’s worth looking at what exactly SB 1047 does and does not do. And when you do that, you find not only that the reality is very different from the rhetoric, but that some tech bigwigs are blatantly misleading the public about the nature of this legislation.
According to its critics, SB 1047 would be hellish for the tech industry. Among other things, detractors warn that the bill would allow start-up founders to be jailed for innocent paperwork mistakes; cede the US AI lead to China; and destroy open-source development. “Without open-source AI, there is no AI start-up ecosystem and no academic research on large models. Meta will be fine, but AI start-ups will just die. Seems pretty apocalyptic to me,” LeCun warned. To make matters worse, AI investors assert that the bill manifests “a fundamental misunderstanding of the technology” and that its creators haven’t been receptive to feedback.
But look past the hyperbole and you’ll find a radically different landscape. The actual bill is composed of broadly popular provisions, crafted with extensive input from AI developers and endorsed by world-leading AI researchers, including the two other people regarded as godfathers of AI alongside LeCun. SB 1047’s primary author says it won’t do any of the “apocalyptic” things its critics warn against, a claim echoed by OpenAI whistleblower Daniel Kokotajlo, who supports the bill and “predict[s] that if it passes, the stifling of AI progress that critics doomsay about will fail to materialize.”
Also unlikely to materialize is an AI exodus from the state. SB 1047 applies to anybody doing business in California—the world’s fifth-largest economy and its de facto AI headquarters.
According to SB 1047 author state Senator Scott Wiener, the heart of the bill requires a set of safety measures from developers of “covered models”—AI systems larger and more expensive than the most powerful existing ones. The legislation would require that these developers provide “reasonable assurance” that their models won’t cause catastrophic harms, defined as at least $500 million in damage or a mass-casualty event. Wiener says the other key provision is that developers must be able to shut down a covered model in case of an emergency.
Wiener is far from a burn-it-down leftist. He identifies as pro-AI, pro-innovation, and pro-open-source. A recent Politico profile describes Wiener as “a business-friendly moderate, by San Francisco standards” and includes criticism from the left for his “coziness” with tech.
Those relationships have not shielded Wiener from the tech industry’s wrath over the bill. All three of the leading AI developers—OpenAI, Anthropic, and Google—are part of TechNet, a trade group opposing the bill (members also include Amazon, Apple, and Meta).
OpenAI initially didn’t take a public position on the bill, but a company spokeswoman spoke out against it in a New York Times article on Wednesday. The Times reported that the company told Wiener that “serious A.I. risks were national security issues that should be regulated by the federal government, not by states.”
A Microsoft lobbyist told me the company is officially neutral but would prefer a national law. TechNet and other industry associations argue that AI safety is already “appropriately being addressed at the federal level” and that we should wait for in-progress national AI safety standards. They fail to acknowledge that Republicans have promised to block meaningful federal legislation and to reverse Biden’s executive order on AI, the closest thing to national AI regulation and the source of those forthcoming standards.
And, as noted above, Google and Meta have publicly opposed the bill.
The nearest thing to industry support has come from Anthropic, the most safety-oriented top AI company. Anthropic published a “support if amended” letter requesting extensive changes to the bill, the most significant of which is a move from what the company calls “broad pre-harm enforcement” to a requirement that developers create safety plans as they see fit. If a covered model causes a catastrophe and its creator’s safety plan “falls short of best practices or relevant standards, in a way that materially contributed to the catastrophe, then the developer should also share liability.” Anthropic calls this a “deterrence model” that would allow developers to flexibly set safety practices as standards evolve.
Wiener says he appreciates Anthropic’s detailed feedback and that the SB 1047 team is positive about the “bulk” of their proposals, but he’s reluctant to fully embrace the shift away from pre-harm enforcement.
A researcher at a top company wrote to me that their safety colleagues “seem broadly supportive” of SB 1047 and “annoyed with the Anthropic letter.”
Vox reported that Anthropic’s attempt to water down the bill “comes as a major disappointment to safety-focused groups, which expected Anthropic to welcome—not fight—more oversight and accountability.”
Anthropic was started in 2021 by former OpenAI employees who left after a failed effort to oust Sam Altman over safety concerns. Anthropic has since taken $6 billion in investment from Google and Amazon, the price of doing business in capital-intensive AI development.
These investments have an effect on company priorities, as Anthropic policy chief Jack Clark acknowledged to Vox last September: “I am pretty skeptical of things that relate to corporate governance because I think the incentives of corporations are horrendously warped, including ours.”
But by comparison, the reaction to the bill from the AI investor community makes Big Tech look downright responsible.
The most coordinated and intense opposition has come from Andreessen Horowitz, known as a16z. The world’s largest venture capital firm has shown itself willing to say anything to kill SB 1047. In open letters and in the pages of the Financial Times and Fortune, a16z partners and founders of companies in its portfolio have brazenly lied about what the bill does.
They say SB 1047 includes the “unobtainable requirement” that developers “certify that their AI models cannot be used to cause harm.” But the bill text clearly states, “‘Reasonable assurance’ does not mean full certainty or practical certainty.”
They claim that the emergency shutdown provision effectively kills open-source AI. However, Wiener says the provision was never intended to apply to open-sourced models, and he amended the bill to make that clear.
The “godmother of AI,” Fei-Fei Li, published an op-ed in Fortune parroting this and other a16z talking points. She wrote, “This kill switch will devastate the open-source community.” An open letter from academics in the University of California system echoes this unsupported claim.
A16z recently backed Li’s billion-dollar AI start-up—context that didn’t make it into Fortune.
The most consistent and perhaps most preposterous narrative is that a16z is championing “little tech” against an overreaching government that’s unduly burdening “start-ups that are just getting off the ground.” But SB 1047 applies only to models that cost at least $100 million to train and use more computing power than any known model yet has.
So these start-ups will be wealthy enough to train unprecedentedly expensive and powerful models, but won’t be able to afford to conduct and report on basic safety practices? Would a16z be happy if start-ups in their portfolio didn’t have these plans in place?
Oh, and the champion of “little tech” neglects to mention that a16z is invested in OpenAI and Meta (where a16z cofounder Marc Andreessen sits on the board).
SB 1047 has also acquired powerful enemies on Capitol Hill. The most dangerous might be Zoe Lofgren, the ranking Democrat on the House Committee on Science, Space, and Technology. Lofgren, whose district covers much of Silicon Valley, has taken hundreds of thousands of dollars from Big Tech and venture capital, and her daughter works on Google’s legal team. She has also stood in the way of previous regulatory efforts.
Lofgren recently took the unusual step of writing a letter against state-level legislation, arguing that SB 1047 was premature because “the science surrounding AI safety is still in its infancy.” Similarly, an industry lobbyist told me that “this is a rapidly evolving industry,” and that by comparison, “the airline industry has established best practices.”
The AI industry does move fast, and we do remain in the dark about the best ways to build powerful AI systems safely. But are those arguments against regulating it now?
This cautious, wait-and-see approach seems to extend only to their position on regulations. When it comes to building and deploying more powerful and autonomous AI systems, the companies see themselves in an all-out race.
In the West, self-regulation is the status quo. The only significant Western mandatory rules on general AI are included in the sweeping EU AI Act, but these don’t take effect until June 2025.
All the major AI companies have made voluntary commitments. But overall, compliance has been less than perfect.
The meltdown in response to SB 1047 is evidence of an industry that is “allergic to regulation because they’ve never been meaningfully regulated,” says Teri Olle, director of Economic Security California, a bill coauthor.
Opponents of SB 1047 are eager to frame it as a radical, industry-destroying measure driven by fears of an imminent sci-fi robot takeover. By shifting the conversation toward existential risk, they aim to distract from the bill’s specific provisions, which have garnered strong support in multiple statewide polls.
Representative Lofgren writes that the bill “seems heavily skewed toward addressing hypothetical existential risks.”
However, coauthors Wiener, Olle, and Sneha Revanur, founder and president of Encode Justice, all told me they were far more focused on catastrophic risks—a bar far below complete human extinction.
It’s true that no one really knows whether AI systems could become powerful enough to kill or enslave every last person (though the heads of the leading AI companies and the most cited AI scientists have all said it’s a real possibility). But it’s very hard to argue, as many tech boosters do, both that AI will be as important as the Industrial Revolution and that there’s no risk AI systems could enable catastrophes.
Three leading AI experts and a “founding figure” of Internet law published a letter endorsing the bill, arguing that “we face growing risks that AI could be misused to attack critical infrastructure, develop dangerous weapons, or cause other forms of catastrophic harm.” These risks, they write, “could emerge within years, rather than decades” and are “probable and significant enough to make safety testing and common-sense precautions necessary.”
Wiener says he would prefer “one strong federal law,” but isn’t holding his breath. He notes that, aside from the TikTok ban, Congress hasn’t meaningfully regulated technology in decades. In the face of this inaction, California has passed its own laws on data privacy and net neutrality (Wiener authored the latter).
Given this, Olle says, “all eyes are on Sacramento and Brussels in the EU to really chart a path for how we should appropriately regulate AI and regulate tech.” She argues that SB 1047 is about more than just regulation—it’s about the question of “Who decides? Who decides what the safety standards are going to be for this very powerful technology?” She observes that, currently, these decisions are being made by a small group of people—so few that they could “fit in a minivan”—yet they’re making choices with “massive societal impact.”
Wiener represents San Francisco and, as a result, has borne a significant personal and political cost by shepherding SB 1047, says someone working on the bill: “You don’t have to love [Wiener] on everything to realize that he is just a stubborn motherfucker.… The amount of political pain he is taking on this is just unbelievable.… He has just lost a lot of relationships and political partners and people who are just incredibly furious at him over this. And I just think he actually thinks the risks are real and thinks that he has to do something about it.”
Opponents assert that there is a “massive public outcry” against SB 1047 and highlight imagined and unsubstantiated harms that will befall sympathetic victims like academics and open-source developers. However, the bill aims squarely at the largest AI developers in the world and has statewide popular support, with even stronger support from tech workers.
If you scratch the surface, the fault lines become clear: AI’s capitalists are defending their perceived material interests from a coalition of civil society groups, workers, and the broader public.
Note: this piece has been updated to include OpenAI’s opposition to the bill.