
Safety Claims Mask Competition as AI Labs Battle Illinois Liability Bill

Key Points

  • Illinois bill would exempt AI labs from liability for mass deaths and financial disasters
  • Anthropic opposes bill, arguing shields create perverse safety incentives
  • OpenAI supports bill, warning liability chills innovation and drives firms abroad
  • Both positions conveniently serve each company's competitive market position
  • Illinois trial lawyers launch counter-campaign defending victims' right to sue
  • Vote scheduled within three weeks; outcome will influence federal AI regulation

References (1)
  [1] Anthropic opposes AI liability bill backed by OpenAI — Wired AI

Can a company that builds AI systems "to be safe" simultaneously lobby for legal immunity when those systems cause mass harm? That question sits at the center of an increasingly bitter dispute between two of the most prominent AI laboratories in the United States.

Anthropic and OpenAI have found themselves on opposite sides of an Illinois bill that would shield AI developers from liability for mass casualties and large-scale financial disasters caused by their systems. The legislation, if passed, would mark one of the most sweeping liability exemptions for the technology sector in American history. Both companies claim their position serves the public interest. Upon closer examination, neither company's position cleanly separates principle from competitive advantage.

Anthropic opposes the bill. The company, backed by Amazon and Google, has argued that broad liability shields would create perverse incentives, allowing labs to cut corners on safety testing with little consequence. "If you remove accountability, you remove the primary driver of caution," the company stated in its formal submission to the Illinois legislature. The argument sounds reasonable. It also happens to align with Anthropic's business model. The company's Claude models compete directly with OpenAI's flagship products, and Anthropic has positioned itself as the "safety-first" alternative. Strong liability rules that punish careless deployment would disproportionately burden the company's less-cautious rivals—rivals who are currently ahead in market share.

OpenAI supports the bill. The company argues that without liability protection, AI development will flee to less-regulated jurisdictions, leaving American firms unable to compete with Chinese labs that face no such restrictions. OpenAI has also warned that expansive liability could chill innovation entirely, since even well-designed systems can cause unforeseen harm. The reasoning is not without merit. It also conveniently serves OpenAI's position as the current market leader. Because the bill's liability shield applies only to systems that meet baseline safety certifications, the certification process itself becomes the cost of entry, and those compliance costs fall hardest on smaller competitors. OpenAI, with its $100 billion-plus valuation and close ties to Microsoft, can absorb regulatory friction that would crush earlier-stage startups.

The Illinois legislation would establish a legal framework unlike anything currently on the books in the United States. Under the proposed language, AI developers could not be held financially responsible for harms resulting from systems that met baseline safety certifications—even if those systems subsequently caused deaths or economic devastation on a massive scale. Supporters say this mirrors protections given to pharmaceutical companies during clinical trials. Critics counter that AI systems are fundamentally different: they are designed to act autonomously in unpredictable environments, and "baseline safety certification" is a term the bill does not adequately define.

Consumer advocates and trial lawyers have denounced both positions as corporate self-interest dressed in public-spirited language. "Anthropic wants rules that hurt OpenAI," said one consumer group representative who requested anonymity to speak candidly. "OpenAI wants rules that hurt everyone else. Neither of them is lobbying for you." The Illinois trial lawyers association has launched a counter-campaign, arguing that the bill would strip victims of any recourse when AI systems cause genuine harm.

The legislation is scheduled for a committee vote within three weeks. Neither company has disclosed the extent of its lobbying expenditure, though Illinois lobbying records show both Anthropic and OpenAI have retained prominent Springfield firms in recent months. The outcome will almost certainly influence how other states—and eventually the federal government—approach AI liability. What happens in Illinois will not stay in Illinois. The question is whether it will serve the public or the balance sheets of whichever company wins this fight.
