- Google is entering the military/government market
- New Pentagon contract allows Gemini use for ‘any lawful purpose’
- Google employees are not happy with the new contract
Google recently expanded its contract with the US Department of Defense (DoD) to provide Gemini for use in classified operations or for “any lawful purpose” and has also pulled out of a $100 million Pentagon challenge to build autonomous voice-controlled drone swarms.
At the same time, the company is facing internal displeasure over its decision to provide Gemini to the Pentagon for classified projects, and has responded by telling staff it is 'proud' of the Pentagon AI contract.
So how have Google’s ethics and policies evolved over time? And are they changing to let the company carve out a highly lucrative – if ethically dubious – slice of the government pie?
Ground the drones
Google’s pivot away from its once widely recognized motto of “Don’t Be Evil” may seem complete in the eyes of some Googlers, but this isn’t the first time the company has changed its policy. Google’s AI principles once stated that it would not deploy its AI tools where they were “likely to cause harm” and would not “design or implement” AI tools for surveillance or weaponry.
Google publicly attributed its withdrawal from the Pentagon competition – to create technology capable of turning spoken instructions into commands for an autonomous drone swarm – to a lack of resources, but the actual reason was an internal ethics review, according to Bloomberg.
In any case, it suggests that the internal ethics board is still functioning and is not completely toothless.
On the other hand, with the company expanding Gemini’s availability to classified networks, the Pentagon is free to use Gemini for “any lawful purpose.” That “lawful” restriction is more bark than bite.
Back before the turn of the century, communications providers were not required to build interception capabilities for law enforcement into their networks – but CALEA and the Patriot Act changed all that. Federal law enforcement was also previously prevented from legally seizing data stored on servers in foreign countries – but the CLOUD Act changed that, too.
Things are only illegal until they’re legal, and vice versa, effectively giving the Pentagon a future-proof loophole if their intended use case suddenly becomes legalized.
Therefore, the “any lawful purpose” clause offers no substantial protection against the use of AI for autonomous weapons systems or mass surveillance – uses Anthropic objected to. It is weakened further by a clause in the Google-DoD contract stating the company has “no right to… veto lawful government operational decision-making,” a sticking point OpenAI also ran into in its Pentagon deal.
This gives the Pentagon almost free rein over the direction it chooses to take with Gemini in its classified projects. Mass surveillance has been happening for decades; AI’s role is simply to make it smarter, more targeted, and more efficient.
A slice of Pentagon pie
The appeal of working as a government and military contractor is simple: There is a lot of money involved. Before the ink was dry on Anthropic’s withdrawal from government use, OpenAI had a shiny extended contract to fill exactly the role Anthropic sought to avoid.
Similarly, Microsoft and Amazon have already won several contracts involving cloud, AI and cybersecurity tools, and it appears that Google is trying to catch up.
Google’s employees have repeatedly challenged the company over the ethics of working with the government. In 2018, protests by Google employees led the company to drop out of Project Maven over the use of Google technology to analyze drone strike footage. Those protests also gave rise to Google’s now-abandoned ‘do no harm’ AI principles.
Google also faced similar controversy as employees opposed the company’s potential involvement in providing technology to Immigration and Customs Enforcement (ICE) and Customs and Border Protection (CBP).
As is tradition, Google employees are once again forming digital picket lines, with over 600 signing a letter to CEO Sundar Pichai asking him to reject any use of Google’s AI technology for military purposes.
In response, Kent Walker, Google’s president of global affairs, wrote in an internal memo Tuesday seen by The Information: “We’ve proudly worked with defense departments since Google’s earliest days, and we continue to believe it’s important to support national security in a thoughtful and responsible way.”