Just hours after President Trump ordered all federal agencies to stop using Anthropic’s AI technology, the US military reportedly deployed that very AI in a major airstrike on Iran. The move exposes a baffling contradiction at the heart of government AI use: a technology banned one moment, yet deemed indispensable the next.
The president’s directive came amid ongoing tensions between the Department of Defense and Anthropic, the AI company behind the Claude model. Trump’s post on Truth Social demanded an "immediate cessation" of federal reliance on Anthropic tools, setting a six-month phase-out timeline for departments currently dependent on the system, including the Pentagon. Yet, according to reporting by The Wall Street Journal, US forces tapped Anthropic’s AI capabilities within hours of this announcement to guide critical military operations.
This isn’t the military’s first use of Anthropic’s technology. Claude was previously employed during the controversial operation to capture Venezuelan leader Nicolás Maduro, underlining the Pentagon’s growing appetite for advanced AI to bolster battlefield decisions. But the simultaneous ban and continued usage highlight a dependency fraught with bureaucratic and strategic complications.
Looking ahead, the Department of Defense plans to transition away from Anthropic’s offerings, eyeing competitors like Elon Musk’s xAI and OpenAI as potential replacements. Both companies recently inked deals to integrate their AI models within government systems. Yet, The Wall Street Journal notes the replacement process could drag on for months, prolonging the awkward phase where banned tech remains mission-critical.
The crux of the matter is the difficulty of untangling military operations from specific AI tooling once it has been integrated. Anthropic’s AI, praised for certain capabilities, is evidently woven into tactical workflows that cannot be easily swapped out, especially in high-stakes conflict scenarios. This liminal state reflects broader challenges governments face globally as they race to harness AI without losing control or stalling operations.
While the Trump administration’s stance on Anthropic may reflect internal politics or security concerns, a smooth pivot to alternative providers remains unproven. Past military experiments with AI, from IBM’s Watson in defense projects to numerous classified initiatives, have often stumbled on implementation and trust issues. As the Pentagon moves forward, it must balance rapid innovation with operational security, and reconcile the paradox of banning a technology it still depends on.
Ultimately, this episode underscores the urgent need for clearer policies governing AI in defense. Abrupt bans risk undermining readiness, while lingering reliance on a single vendor creates vulnerabilities. The US military’s experience could serve as a cautionary tale for other nations scrambling to regulate AI in national security without hobbling their own capabilities.
