The rapid rise of generative AI has built a multibillion-dollar industry virtually overnight, but panelists at South by Southwest said large AI companies face less oversight than the businesses lined up outside of the Austin, Texas, conference.
“There’s more regulation on a food truck than there is on this technology,” Emilia Javorsky, director of the Futures Program at the Future of Life Institute, said during a panel at SXSW on Monday.
“Part of that is because we understand the food truck more than we understand this technology,” noted Daniel Kokotajlo, a former safety researcher at ChatGPT maker OpenAI who left the company in 2024 to speak out about the need for additional safety and transparency in the industry.
The meteoric rise of artificial intelligence and the complexity inherent in the technology make it particularly difficult to craft rules and guardrails, even before the political challenge of getting anything passed.
But there are plenty of ideas, including one from Harvard law professor Lawrence Lessig: requiring AI developers to carry insurance.
“It’s really important that we understand regulation doesn’t destroy, it enables, if it’s smart,” Lessig said on stage at the panel.
The need for transparency
The idea of “trust” has been on the minds of many speakers on artificial intelligence at SXSW, with experts worrying not just about the science fiction-sounding potential of superintelligence, but also about the practical everyday implications of bias and errors in more rudimentary products like chatbots.
The question is how to ensure every AI developer is building that trust. AI safety is a topic companies know they need to address; just look at the pages dedicated to it on the websites of OpenAI, Google, Microsoft, Anthropic, Adobe and others.
Javorsky suggested a regulatory structure not unlike that around the development of prescription drugs: think a Food and Drug Administration for AI. Most of the resources used in developing drugs go toward proving they are safe, creating trust in those products once they make it to the market. When consumers feel a product is safe, they are less apprehensive about using it.
“Even from the perspective of someone who just wants to see AI flourish as an industry, safety is something you should care about,” Javorsky said.
Kokotajlo and Lessig have both pushed for whistleblower protections for workers at AI companies. Existing whistleblower laws protect workers who call out illegal activity, but many potentially dangerous practices involving new technologies like artificial intelligence are perfectly legal; there are simply no laws on the books covering them.
Lessig said the European Union’s AI regulations are too complicated and don’t force companies to do the right thing by default.
“I think instead we have to develop a serious conversation about the minimal viable product of regulation,” he said. “The minimal viable product is to build incentives inside of these companies to behave in the right way.”
Insurance as an AI regulation
Lessig said requiring AI developers to carry insurance would create market incentives for responsibility. Insurance companies would set rates based on how risky a company's products and behaviors are, meaning it would be more expensive to make products that lack sufficient guardrails.
Insurance coverage requirements exist in other contexts to put a price, essentially, on risky and irresponsible behavior. Think of how car insurance rates go up if you have a poor driving record and down if you have a safe driving history.
“It’ll be crude initially, and then you’ll have some insurance companies that come out and think about how to price it differently,” Lessig told CNET after the panel. “That’s the kind of incentive we need to create, that kind of competitive influence to determine what exactly is the safety risk that they need to be insuring for.”
An insurance requirement would not solve everything, Lessig said, but it would create some incentive for companies to develop robust and accountable safety practices as they push for smarter and smarter AI systems.
“Insurance is, for me, the thing that’s addressing the basic runaway risk concern that this is going to cause huge damage to society,” he said.