Microsoft claims its servers were accessed illegally to create unsafe AI content


  • Microsoft’s December 2024 complaint concerns 10 anonymous defendants
  • “Hacking-as-a-service operation” stole legitimate users’ API keys and bypassed content protection
  • The complaint, filed in a Virginia district court, has already led to a GitHub repository and a website being taken down

Microsoft has accused an unnamed collective of developing tools to deliberately bypass the safety guardrails in its Azure OpenAI Service, which provides access to OpenAI models, including the technology behind ChatGPT.

In December 2024, the tech giant filed a complaint in the US District Court for the Eastern District of Virginia against 10 anonymous defendants, alleging violations of the Computer Fraud and Abuse Act, the Digital Millennium Copyright Act, and federal racketeering laws.
