- Microsoft’s December 2024 complaint concerns 10 anonymous defendants
- “Hacking-as-a-service operation” stole legitimate users’ API keys and bypassed content protection
- The complaint, filed in the Eastern District of Virginia, has led to a GitHub repository and a website being pulled
Microsoft has accused an unnamed collective of developing tools to deliberately bypass the security controls in its Azure OpenAI Service, the platform that provides access to OpenAI models such as those behind ChatGPT.
In December 2024, the tech giant filed a complaint in the US District Court for the Eastern District of Virginia against 10 anonymous defendants, alleging violations of the Computer Fraud and Abuse Act, the Digital Millennium Copyright Act, and federal racketeering law.
Microsoft claims its servers were accessed to help create “offensive”, “harmful and illegal content”. Although it did not provide further details about the nature of the content, it was clear enough for quick action: Microsoft had a GitHub repository pulled offline and claimed in a blog post that the court allowed it to seize a website related to the operation.
ChatGPT API keys
In the complaint, Microsoft stated that it first discovered the misuse in July 2024, when users employed Azure OpenAI Service API keys, the credentials that authenticate customers to the service, to produce illegal content. An internal investigation then found that the API keys in question had been stolen from legitimate customers.
“The precise manner in which Defendant obtained all of the API keys used to commit the misconduct described in this complaint is unknown, but it appears that Defendant engaged in a pattern of systematic API-key theft that allowed them to steal Microsoft API keys from multiple Microsoft customers,” the complaint reads.
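For context on why a stolen key is so powerful: Azure OpenAI requests are authenticated simply by sending the key in an `api-key` HTTP header, so whoever holds the key has the same access as the paying customer. A minimal sketch of how such a request is built (the resource name, deployment name, and key below are placeholders, not real values, and the request is only constructed, not sent):

```python
import json
import urllib.request

# Hypothetical placeholder values for illustration only.
RESOURCE = "example-resource"
DEPLOYMENT = "example-deployment"
API_KEY = "<api-key>"

# Endpoint shape follows Azure OpenAI's documented REST pattern.
url = (f"https://{RESOURCE}.openai.azure.com/openai/deployments/"
       f"{DEPLOYMENT}/chat/completions?api-version=2024-02-01")

# The key travels in a plain request header; anyone who exfiltrates
# it can make billable calls attributed to the legitimate customer.
request = urllib.request.Request(
    url,
    data=json.dumps({"messages": [{"role": "user", "content": "Hi"}]}).encode(),
    headers={"api-key": API_KEY, "Content-Type": "application/json"},
)

print(request.get_header("Api-key"))
```

This is why API keys are treated like passwords: rotating a compromised key is the only way to revoke that access.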
Microsoft alleges that, with the ultimate goal of launching a hacking-as-a-service product, the defendants created de3u, a client-side tool, to steal these API keys, plus additional software to allow de3u to communicate with Microsoft servers.
De3u was also designed to bypass the Azure OpenAI Service’s built-in content filters and the subsequent auditing of user prompts, allowing DALL-E, for example, to generate images that OpenAI would not normally allow.
“These features, combined with Defendant’s illegal programmatic API access to the Azure OpenAI service, enabled Defendant to reverse engineer means of circumventing Microsoft’s content and anti-abuse measures,” the complaint said.
Via TechCrunch