Researcher fools ChatGPT into revealing security keys – by saying “I give up”


  • Experts show how some AI models, including GPT-4, can be manipulated with simple user prompts
  • Guardrails don’t do a great job of detecting deceptive framing
  • The vulnerability could be used to obtain personal information

A security researcher has shared details of how other researchers fooled ChatGPT into revealing a Windows product key using a prompt that anyone could try.

Marco Figueroa explained how a ‘guessing game’ prompt with GPT-4 was used to bypass guardrails intended to stop the AI from sharing such data, ultimately producing at least one key belonging to Wells Fargo Bank.
