ChatGPT can threaten to ‘key your car’ and become increasingly abusive if you prompt it just right, new study finds


  • A study claims that AI tools can be pushed past their safety guardrails
  • Chatbots can be goaded into abusive and combative exchanges
  • The findings matter for everyday users and large institutions alike

If you’ve ever used an AI chatbot, you’ve probably come across the sycophantic, obsequious tone it occasionally rolls out in response to your queries. But a recent study shows that AI tools can also swing in the opposite direction, with large language models (LLMs) being poked and prodded into outright abusive behavior if you know which prompts to use.

According to research published in the Journal of Pragmatics (via The Guardian), ChatGPT can escalate into combative behavior and protracted disputes when fed “real-world argumentative exchanges.”
