- Anthropic CEO Dario Amodei does not want Claude to be used by the Pentagon for mass surveillance and autonomous weapons
- A statement has laid bare Anthropic’s reasons for keeping Claude’s safety rails in place
- Pete Hegseth gave Anthropic until Friday to give the DoD full access
Anthropic CEO Dario Amodei has released a statement regarding the company’s ongoing dispute with the US Department of Defense.
Amodei stated that Anthropic “cannot in good conscience accede” to the DoD’s request to provide full access to its AI models, fearing they could be used for “mass surveillance” and “fully autonomous weapons.”
US Defense Secretary Pete Hegseth has threatened to label Anthropic a “supply chain risk” and invoke the Defense Production Act to force the company into compliance.
Unprecedented threats to Anthropic
In his statement, Amodei said that Anthropic has historically had a very good relationship with the US government, noting it was the first AI company to deploy its models within US government networks and the National Laboratories, and the first to deploy models for national security use.
Amodei also noted that the company has complied with US regulations on the use and sale of AI models to China, to the extent that it chose to “forgo hundreds of millions of dollars in revenue” by preventing the use of Claude by the Chinese Communist Party.
“Anthropic understands that the War Department, not private companies, makes military decisions,” Amodei continued. “However, in a narrow set of cases, we believe that artificial intelligence can undermine, rather than defend, democratic values.”
But Anthropic’s hesitation to give the DoD full access to Claude centers on the model’s potential misuse for two nefarious purposes.
Regulations around AI have not caught up with the capabilities of models like Claude, Amodei says, which would allow the US government to deploy Claude as a tool for domestic mass surveillance.
Theoretically, the government could buy highly detailed records and use AI models to organize them into a highly accurate profile of American citizens on a scale never seen before.
As for the use of artificial intelligence in weapons systems, Amodei says they “could prove critical to our national defense,” but he argues that current AI models are “simply not reliable enough to operate fully autonomous weapons.” If an AI model responsible for an autonomous weapon system were to suffer a hallucination, the responsibility would likely fall on the model developer.
Amodei also addresses the threats made by Hegseth, saying they are “inherently contradictory: one labels us as a security risk; the other labels Claude as essential to national security.”
The statement concludes that Anthropic’s “strong preference is to continue serving the Department and our warfighters — with our two desired safeguards in place.”
“Should the department choose to leave Anthropic, we will work to enable a smooth transition to another provider and avoid any disruption to ongoing military planning, operations or other critical missions. Our models will be available on the expansive terms we have proposed for as long as necessary.”