- Dell CEO Michael Dell was asked about Anthropic at a forum
- Dell said companies doing business with the government cannot dictate how their technology is used
- He added that it is not a “workable model”
The CEO of Dell has said in a Bloomberg television interview that companies in business with the government cannot dictate how their technology is used.
Michael Dell added, “I just don’t think it’s a workable model,” when asked about Anthropic’s ongoing fight against the Pentagon’s designation of the company as a “supply chain risk.”
Speaking at a forum in Washington, Dell did not mention Anthropic by name. He added that his company has systems and controls in place to ensure sales go only to authorized users, but did not elaborate.
The Anthropic struggle
Defense Secretary Pete Hegseth recently branded Anthropic a “supply chain risk” after the AI company refused to budge on allowing the US government to use its Claude model for domestic mass surveillance and fully autonomous weapons systems.
The designation, together with an executive order from President Donald Trump directing all government agencies to stop using Anthropic technology, has prompted Anthropic to file two lawsuits against the US government in an attempt to overturn it.
The supply chain risk designation is typically reserved for foreign companies at risk of abuse by adversaries, the most notable example being US sanctions and designations against Huawei.
What happens then?
By labeling Anthropic a supply chain risk, the Trump administration is setting a dangerous precedent: either companies comply with the US government’s desired uses of their products, as happened with OpenAI’s latest contract, or they let their contracts lapse and the government buys the technology from another company.
Those in the know will recall how Google ended its partnership with the US military after an internal petition protesting the company’s involvement in Project Maven gathered over 4,000 signatures. The project used AI image recognition software developed by Google to support drone strikes in the Middle East.
Google chose to let its contract expire without renewal, and the US government turned to other companies including Palantir, Anduril, Amazon Web Services and Anthropic to fill the gap.
Now, as a result of the Anthropic situation, nearly 1,000 Google and OpenAI employees have signed letters calling for clear limits on the military use of artificial intelligence. If these companies bow to their employees’ demands, they could face the wrath of the US government; if they refuse, they may face a mass exodus of employees.
One result that the US government may have failed to recognize in its dealings with Anthropic is that companies may now be less willing to work with the US Department of Defense for fear that their technology could be used for purposes that their terms of service expressly prohibit.