The Pentagon has issued an ultimatum to artificial intelligence powerhouse Anthropic: drop its objections to the use of its Claude AI system for autonomous weapons and mass surveillance. If Anthropic doesn’t comply, the Pentagon is threatening to force the company to hand over its technology and blacklist it from all future defense contracts.
The core of this intense dispute revolves around the Pentagon’s stance that it needs to use AI for “all lawful purposes,” which includes potentially deploying lethal autonomous weapons and mass surveillance. Per The Washington Post, Anthropic, on the other hand, has drawn a firm line, insisting that its AI isn’t reliable enough for robotic weaponry or sweeping surveillance without risking both troops and civilians, and possibly undermining democratic values.
Tensions spiraled last month during a discussion of a hypothetical nuclear strike. A defense official described a scenario in which an intercontinental ballistic missile was launched at the United States and asked whether Claude could help shoot it down, since every second counts. According to the official, Anthropic CEO Dario Amodei’s response was, “You could call us and we’d work it out.” That answer apparently infuriated the Pentagon, but Anthropic denies the account.
I appreciate that Anthropic is sticking to its guns
Anthropic called the account “patently false,” stating that the company has already agreed to allow Claude to be used for missile defense. Another flashpoint that has been cited involves Claude’s potential use in capturing Venezuelan leader Nicolás Maduro.
This whole situation escalated after a face-to-face meeting between Amodei and Defense Secretary Pete Hegseth. Pentagon chief spokesman Sean Parnell took to X to clarify that the department isn’t interested in mass domestic surveillance or deploying autonomous weapons. He stated that this is a “common-sense request” to prevent Anthropic from “jeopardizing critical military operations.”
Amodei reaffirmed Anthropic’s commitment to working with the Pentagon while refusing to budge on its red lines. He believes current AI systems aren’t reliable enough for robotic weaponry and that existing laws don’t cover the vast potential of AI surveillance tools. He bluntly stated that these two use cases have never been in Anthropic’s contracts and shouldn’t be now. For a recent example of AI unreliability, look no further than the chaos Amazon has been facing with its own systems.
The Pentagon’s Under Secretary of Defense for Research and Engineering, Emil Michael, has been leading discussions with Anthropic. He believes the government, not individual tech firms, should have the final say on how AI is used. Michael even accused Amodei of having a “God-complex,” claiming Amodei wants to “personally control the US Military” and is okay with “putting our nation’s safety at risk.”
Ultimately, this fight seems more philosophical than technical. As Michael C. Horowitz from the University of Pennsylvania noted, “The Pentagon does not trust that Anthropic will be a reliable vendor, and Anthropic worries about misuse of its technology.”
Published: Mar 1, 2026 04:23 pm