- Pentagon and Anthropic are at loggerheads over the use of Claude
- Claude was allegedly used in the operation to capture Nicolás Maduro
- Anthropic refuses to allow its models to be used in “fully autonomous weapons and domestic mass surveillance”
A rift has emerged between the Pentagon and several AI companies over how their models can be used in military operations.
The Pentagon has asked AI providers Anthropic, OpenAI, Google and xAI to allow the use of their models for “all lawful purposes”.
Anthropic has expressed fears that its Claude models could be used in autonomous weapons systems and domestic mass surveillance; in response, the Pentagon has threatened to terminate its $200 million contract with the AI provider.
The $200 million standoff over AI weapons
Speaking to Axios, an anonymous Trump administration adviser said one of the four companies has agreed to allow the Pentagon full use of its model, while two others are showing flexibility in how their AI models can be used.
The Pentagon’s relationship with Anthropic has been strained since January over how its Claude models can be used, and the Wall Street Journal reported that Claude was used in the US military operation to capture then-Venezuelan President Nicolás Maduro.
An Anthropic spokesperson told Axios that the company “has not discussed the use of Claude for specific operations with the Department of War.” The company stated that its usage policy with the Pentagon was under review, specifically referring to “our hard limits around fully autonomous weapons and domestic mass surveillance.”
Chief Pentagon spokesman Sean Parnell said: “Our nation requires our partners to be willing to help our warfighters win in any battle.”
Security experts, policymakers and Anthropic CEO Dario Amodei have called for greater regulation of AI development and stronger safeguards, with specific reference to the use of AI in weapons systems and military technology.