Pentagon Expands AI Training to Classified Data
The Pentagon is planning a major expansion of its AI capabilities by allowing leading AI companies to train military-specific models on classified intelligence data. According to MIT Technology Review, the initiative would embed sensitive surveillance reports, battlefield assessments, and other classified information directly into AI models' training data, a significant departure from current practice, in which models merely answer questions about classified material.
The move comes as the US military intensifies its push to become an "AI-first" warfighting force amid escalating tensions with Iran. OpenAI and Elon Musk's xAI have already secured agreements to operate their models in classified settings, and the Pentagon is now seeking to go further by training new model variants on classified data itself.
How the Classified Training Would Work
Training would occur in secure data centers accredited to host classified government projects, where copies of AI models would be paired with classified datasets. While the Department of Defense would retain ownership of the data, AI company personnel with appropriate security clearances could in rare cases access the information directly.
A defense official told MIT Technology Review that training on classified data is expected to make models significantly more accurate and effective for tasks like analyzing targets in Iran. However, before launching full-scale classified training, the Pentagon first intends to evaluate model performance using non-classified data, such as commercially available satellite imagery.
Pentagon Seeks Alternatives to Anthropic
The classified training push coincides with a dramatic shift in the Pentagon's AI partnership strategy. According to TechCrunch, the military is actively developing alternatives to Anthropic—the company behind Claude—after what sources describe as a "dramatic falling-out" between the two parties.
Anthropic's Claude has already proven valuable in classified environments, including for analyzing targets in Iran. However, the Pentagon's new approach clearly favors OpenAI and xAI as primary partners for its most sensitive AI initiatives.
Security Risks Loom Large
Aalok Mehta, director of the Wadhwani AI Center at the Center for Strategic and International Studies and formerly of Google and OpenAI, warned that training on classified data presents unprecedented risks. The biggest concern: models could memorize and inadvertently leak classified information, creating security vulnerabilities that did not exist when AI simply answered questions about sensitive material.
The Pentagon had declined to comment on its specific AI training plans as of publication time.