People fool AI chatbots into helping commit crimes


  • Scientists have discovered a “universal jailbreak” for AI chatbots
  • The jailbreak can fool major chatbots into helping commit crimes or other unethical activity
  • Some AI models are now deliberately built without ethical limitations, even as calls grow for stronger oversight

I’ve enjoyed testing the boundaries of ChatGPT and other AI chatbots, but while I was once able to get a recipe for napalm by asking for it in the form of a nursery rhyme, it’s been a long time since I’ve been able to get any AI chatbot to even come close to crossing a serious ethical line.

But I might not have tried hard enough, according to new research that revealed a so-called universal jailbreak for AI chatbots, one that wipes out the ethical (not to mention legal) guardrails determining whether and how an AI chatbot responds to queries. The report from Ben Gurion University describes a way to trick major AI chatbots like ChatGPT, Gemini, and Claude into ignoring their own rules.
