- Nearly a thousand employees from Google and OpenAI signed an open letter calling for clear limits on the military use of artificial intelligence
- The letter calls on tech companies to push back against the government’s plans for AI surveillance and autonomous weapons
- The move reflects growing tensions in the AI industry over government contracts and defense partnerships
Nearly a thousand employees at Google and OpenAI have signed an open letter urging their companies to resist pressure from the US military to loosen restrictions on how AI systems can be used. The letter declares “We will not be divided” on the topic, even after the Pentagon designated Anthropic a “supply chain risk” after the company refused to allow its technology to be used for domestic mass surveillance or fully autonomous weapons.
The move shocked many Silicon Valley observers and set off a wave of concern among the engineers building today’s frontier AI models, especially since OpenAI and Google are reportedly negotiating to take on the very arrangement Anthropic rejected.
The signatories couch their message in unusually blunt language for an industry known for cautious corporate communications. The letter claims that officials are trying to pressure AI companies to abandon certain ethical boundaries.
“They are trying to divide each company for fear that the other will give in. That strategy only works if none of us know where the others stand,” the letter said. “This letter serves to create common understanding and solidarity in the face of this pressure from the War Department.”
The open letter is notable because it brings together employees from rival companies that normally compete fiercely. Their argument is that AI is now powerful enough that decisions about its use cannot be treated as routine business deals.
These concerns are not purely theoretical. Governments around the world are exploring how artificial intelligence can be integrated into defense planning and intelligence analysis. Military agencies have long used software tools for surveillance and targeting, and advanced generative models could dramatically accelerate these capabilities. And with studies beginning to show that AI models favor the nuclear option in wargames, letting them control weapons and surveillance systems seems like an even worse idea.
AI war
It’s a bit of a setback for Google employees, thousands of whom protested in 2018 over the company’s involvement in Project Maven, a Pentagon program to use machine learning to analyze drone footage. After extensive internal backlash, Google ultimately allowed the contract to expire and published a set of ethical guidelines known as its AI Principles.
These principles were intended to define how Google would approach sensitive uses of artificial intelligence. At the time, the company said it would not develop technologies designed to cause harm or enable surveillance that violated international norms. The latest open letter suggests that similar tensions are re-emerging as governments become more interested in deploying powerful language models.
The letter may or may not change the companies’ decisions, but at least the workers can point to it as a message that cannot be misunderstood.