Microsoft Copilot AI attack took just one click to compromise users – here’s what we know


  • Varonis discovers new prompt injection method via malicious URL parameters, called “Reprompt”.
  • Attackers could trick GenAI tools into leaking sensitive data with a single click
  • Microsoft fixed the bug and promptly blocked injection attacks through URLs

Security researchers Varonis have discovered Reprompt, a new way to perform prompt injection attacks in Microsoft Copilot that does not include sending an email with a hidden prompt or hiding malicious commands on a compromised website.

Similar to other prompt injection attacks, this one also only takes a single click.

Leave a Comment

Your email address will not be published. Required fields are marked *

Scroll to Top