- OpenAI has signed a new contract with the Pentagon
- The contract’s wording left room for AI to be used for domestic mass surveillance
- Sam Altman is facing criticism over his position on the matter
Following Anthropic’s designation as a supply chain risk by Defense Secretary Pete Hegseth and the loss of its $200 million Pentagon contract, OpenAI is now in the firing line over its own deal with the Pentagon.
Although a clause in OpenAI’s 2023 terms prohibited its AI models from being used by the US military, several OpenAI employees have revealed that the Pentagon used the company’s models anyway.
At the time, the Pentagon had a contract with Microsoft, which licenses OpenAI’s technology, giving it access through Azure OpenAI, a service that was not subject to the same restrictions.
With Anthropic out of the picture after refusing to let the Pentagon use its models for autonomous weapons systems and domestic mass surveillance, OpenAI CEO Sam Altman is now being questioned over the company’s latest contract with the US military.
In 2024, OpenAI lifted its blanket ban on military use of its models and later signed a contract with Anduril allowing its models to be deployed for national security purposes.
Altman has made clear his support for Anthropic’s stance on preventing Claude from being used for nefarious purposes, yet OpenAI’s new deal with the US military left room for exactly those uses, sources familiar with the matter told Wired.
Regulation has lagged behind advances in artificial intelligence, allowing government agencies to buy personal information about American citizens from data brokers and then use AI models to categorize and sort it into highly accurate, detailed profiles.
Commenting on the latest agreement between OpenAI and the US military, OpenAI researcher Noam Brown stated: “Over the weekend it became clear that the original language of the OpenAI/DoW agreement left legitimate questions unanswered, particularly around some new ways AI could potentially enable illegal surveillance.”
Brown continued: “The language is now updated to address this, but I also firmly believe that the world should not rely on trusting AI labs or intelligence agencies for their safety and security.”
Sarah Shoker, the former head of OpenAI’s geopolitical team, said: “The biggest losers in all of this are ordinary people and civilians in conflict zones. Our ability to understand the effects of military AI in war is and will be severely hampered by layers of opacity caused by technical design and politics. These are black boxes all the way down.”