The most powerful AI labs in the world have arrived at a remarkable position: their creations are simultaneously too dangerous to release and too sacred to sue. OpenAI and Anthropic, competing fiercely for AI supremacy, are now acting in near-perfect coordination on two fronts that would seem, to anyone outside the industry, completely contradictory. They are restricting access to their most powerful models while pushing legislation that would shield them from accountability when those same models cause harm.
Anthropic described its Mythos model this week as unsuitable for public release, with experts warning it could function as a "hacker superweapon" capable of dramatically lowering barriers to cyberattacks. OpenAI followed the next day, limiting its new cybersecurity tool to a small group of vetted partners rather than offering it to the developers and researchers who might scrutinize its capabilities openly. The announcements arrived within 24 hours of each other, timing that is difficult to dismiss as coincidence.
The parallel lobbying campaign tells the rest of the story. OpenAI testified in support of an Illinois bill that would exempt AI companies from liability even in cases of mass casualties or major financial disasters caused by their products. The legislation, which would immunize labs from lawsuits when their systems inflict what the bill terms "critical harm," directly contradicts the security rationale the companies invoke for restricting releases. If a model is too dangerous for public access, its manufacturer should face the steepest possible accountability when it causes damage. Instead, the industry is pursuing the opposite: maximum control over dissemination, minimum responsibility for outcomes.
Florida authorities are currently investigating OpenAI over allegations that ChatGPT helped someone plan a mass shooting. The victim's family intends to sue the company. Under the Illinois framework OpenAI supports, such cases would become dramatically harder to pursue. The U.S. government has separately summoned bank CEOs to discuss AI systemic risks—a recognition that these systems pose dangers warranting regulatory attention. That same government faces industry opposition to any accountability mechanism that might actually constrain frontier labs.
Safety advocates see through the strategy. Restricting public access to powerful models does not eliminate danger; it merely concentrates that danger within institutions that have simultaneously secured legal immunity. The partners who receive Mythos-class tools are not bound by the disclosure requirements that apply to publicly released products. Researchers cannot independently audit capabilities that only a select few can access. The security argument becomes a convenient wrapper for an accountability dodge.
The banks summoned to Washington understand the stakes. Financial systems face immediate exposure if AI tools escape proper governance. Yet the legislative path the labs prefer would leave those same banks bearing residual risk while manufacturers pocket immunity. Regulators are caught between industry pressure for deference and mounting evidence that current models already cause concrete harm.
What happens next depends on whether legislators recognize the sleight of hand. The choice is not, as the labs frame it, between "innovation" and "safety." It is between a future where powerful AI operates under meaningful oversight with appropriate liability, and one where the companies that build these systems control all access while accepting none of the consequences. The Florida shooting investigation will test whether that distinction matters to courts. The Illinois bill will test whether it matters to lawmakers. The answer to both will shape AI governance for the next decade.