Scientists poison their own data so that stolen copies ruin AI results


  • Researchers from China and Singapore proposed AURA (Active Utility Reduction via Adulteration) to protect GraphRAG systems
  • AURA deliberately poisons proprietary knowledge graphs so that stolen data produces hallucinations and incorrect responses
  • Correct output requires a secret key; in tests, AURA disrupted tools built on stolen knowledge graphs with roughly 94 percent effectiveness (a sketch of the general idea follows this list)
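
The article does not spell out AURA's actual adulteration or keying mechanism, so the following is only a minimal Python sketch of the general idea, assuming that a secret MAC key lets the graph owner separate genuine triples from decoys, while anyone querying a stolen copy without the key retrieves a poisoned mix. The key, the toy triples, and the tagging scheme are all invented for illustration and are not taken from the paper.

```python
import hmac, hashlib, os

SECRET_KEY = b"owner-only-secret"  # hypothetical key held only by the graph owner

def tag(triple):
    """Deterministic MAC over a triple; only the key holder can recompute it."""
    return hmac.new(SECRET_KEY, "|".join(triple).encode(), hashlib.sha256).hexdigest()

# Genuine proprietary triples (toy examples)
genuine = [
    ("DrugX", "treats", "ConditionA"),
    ("DrugX", "interacts_with", "DrugY"),
]

# Decoy triples deliberately mixed in to mislead anyone who copies the graph
decoys = [
    ("DrugX", "treats", "ConditionB"),
    ("DrugX", "interacts_with", "DrugZ"),
]

# Stored graph: genuine triples carry a valid MAC, decoys carry random junk
# that is indistinguishable from a valid MAC without the key.
graph = [(t, tag(t)) for t in genuine] + [(t, os.urandom(32).hex()) for t in decoys]

def retrieve(subject, key=None):
    """Without the key, every matching triple looks equally valid; with it, decoys are dropped."""
    hits = [(t, m) for t, m in graph if t[0] == subject]
    if key is None:
        return [t for t, _ in hits]  # attacker's view: genuine and decoy facts mixed
    return [t for t, m in hits
            if hmac.compare_digest(m, hmac.new(key, "|".join(t).encode(), hashlib.sha256).hexdigest())]

print(retrieve("DrugX"))                   # poisoned: four "facts", half of them false
print(retrieve("DrugX", key=SECRET_KEY))   # clean: only the two genuine triples
```

The point of the design, as described in the article, is that theft becomes pointless rather than impossible: a copied graph still answers queries, but the answers are silently corrupted unless the querier holds the secret key.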

Researchers from universities in China and Singapore have come up with a creative way to deter the theft of data used in generative AI.

Among other components, today's large language models (LLMs) rely on two important elements: training data and retrieval-augmented generation (RAG).
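
For readers unfamiliar with the second element, here is a purely illustrative Python sketch of retrieval-augmented generation over a tiny knowledge graph, i.e. GraphRAG in miniature. The graph, the keyword matching, and the prompt format are invented for illustration and are not from the paper.

```python
# Toy knowledge graph: (subject, relation, object) triples
KG = [
    ("AURA", "proposed_by", "researchers from China and Singapore"),
    ("AURA", "protects", "GraphRAG systems"),
]

def retrieve_context(query, graph):
    """Naive keyword retrieval: return triples whose subject appears in the query."""
    return [f"{s} {r} {o}" for s, r, o in graph if s.lower() in query.lower()]

def build_prompt(query):
    """Augment the user query with retrieved facts before sending it to an LLM."""
    facts = "\n".join(retrieve_context(query, KG))
    return f"Context:\n{facts}\n\nQuestion: {query}\nAnswer using only the context."

print(build_prompt("What does AURA protect?"))
```

Because the model's answer is grounded in whatever the retrieval step returns, poisoning the knowledge graph, as AURA does, directly corrupts the generated output for anyone without the key.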
