Judge Blocks Pentagon Blacklist of Anthropic

Key Points

  • Judge Rita Lin blocked Pentagon from labeling Anthropic a supply chain risk
  • Hegseth skipped required procedures including congressional notification
  • Pentagon used Anthropic's AI through Palantir for over a year without complaint
  • Government admitted it had no evidence for 'kill switch' claim
  • Seven days to appeal; second case challenging designation still pending
References (1)
  1. Judge blocks Pentagon's Anthropic designation — MIT Technology Review AI

The Pentagon wanted to ban Anthropic from government systems. But the same government had been using Anthropic's AI through a Palantir partnership for over a year.

That contradiction sits at the heart of a ruling last Thursday by a California judge, who temporarily blocked the Pentagon from labeling Anthropic a supply chain risk and from ordering agencies to stop using its Claude AI. Judge Rita Lin's 43-page opinion exposed something the administration may not have anticipated: its own paper trail.

The government's case began unraveling almost immediately. For most of 2025, defense employees accessed Anthropic's Claude through Palantir under terms that cofounder Jared Kaplan said "prohibited mass surveillance of Americans and lethal autonomous warfare." No complaints. No concerns about kill switches. No national security risks. The dispute began only when the government sought a direct contract and encountered a company unwilling to compromise on its ethical constraints.

What followed was a familiar pattern: tweet first, lawyer later. President Trump's February 27 post on Truth Social directed every federal agency to stop using Anthropic, calling its staff "Leftwing nutjobs." Defense Secretary Pete Hegseth soon announced he'd label Anthropic a supply chain risk. But the designation required specific procedural steps—consultation with congressional committees, documented evaluation of alternatives, evidence of actual harm—and Lin found the administration bypassed every one. Letters to Congress admitted less severe measures were "evaluated and deemed not possible" without explaining why. When pressed on the administration's core claim that Anthropic could deploy a "kill switch," officials conceded they had no evidence supporting it.

This preliminary injunction represents the first legal crack in the Trump administration's campaign to blacklist frontier AI companies as national security risks. Whether it holds depends on the administration's next move. The government has seven days to appeal, and a second case challenging the designation itself remains pending.

Lin made clear that skipping process comes with consequences. Her opinion suggests what is fundamentally a contract dispute never needed to become a constitutional showdown—until officials decided they wanted one.
