- Google employees sign open letter to CEO over concerns about military AI use
- AI developers don’t want their technology used for “classified purposes”
- Google is currently negotiating a contract with the Pentagon
Over 600 Google employees have signed a letter calling on CEO Sundar Pichai to reject any use of their AI technology for military purposes.
The open letter highlights the serious ethical concerns staff have, saying: “Human lives are already being lost and civil liberties are being jeopardized at home and abroad because of the misuse of the technology we play a key role in building.”
“As people who work with artificial intelligence, we know that these systems can centralize power and that they make mistakes,” the letter said. “We feel that our proximity to this technology creates a responsibility to highlight and prevent its most unethical and dangerous uses.”
Another term for ‘supply chain risk’?
In March, Anthropic CEO Dario Amodei refused to allow the Pentagon to use its Claude models over fears they could be used for “mass surveillance” and “fully autonomous weapons,” leading Defense Secretary Pete Hegseth to declare the company a “supply chain risk.”
Soon after, OpenAI stepped up to fill the gap left by Anthropic, with CEO Sam Altman facing both internal and external criticism over his apparent willingness to allow military use of ChatGPT.
OpenAI’s new Pentagon contract was full of loopholes that would have allowed the same uses of ChatGPT that Anthropic feared for Claude. It was later amended to state that OpenAI’s models would not be used for the “intentional tracking, monitoring, or surveillance of US persons or nationals, including through the procurement or use of commercially acquired personal or identifiable information.”
Shortly afterwards, Sam Altman told his staff that the Pentagon had said that OpenAI was not “going to make operational decisions” about how the military uses AI technologies.
Now, Googlers join the growing number of AI company employees and members of the public opposed to military use of AI tools. “Making the wrong call right now would cause irreparable damage to Google’s reputation, business and role in the world,” the letter said.
After protests involving Google employees in 2018, the company changed its AI principles to state that it would not deploy its AI tools where they were “likely to cause harm” and would not “design or implement” AI tools for surveillance or weaponry. These clauses were quietly removed from its AI Principles on February 4, 2025.