- Researchers from China and Singapore proposed AURA (Active Utility Reduction via Adulteration) to protect GraphRAG systems
- AURA deliberately poisons proprietary knowledge graphs so that stolen data produces hallucinations and incorrect responses
- Correct output requires a secret key; tests showed AURA degraded the utility of stolen knowledge graphs in roughly 94% of cases
Researchers from universities in China and Singapore have come up with a creative way to make stealing the data behind generative AI pointless.
Today's large language models (LLMs) rely on, among other things, two important elements: training data and retrieval-augmented generation (RAG).
Training data teaches an LLM how language works and gives it broad knowledge up to a cutoff date. It does not give the model access to new information, private documents, or rapidly changing facts; once training is complete, that knowledge is frozen.
RAG, on the other hand, exists because many real questions depend on current, specific, or proprietary data (such as company policies, breaking news, internal reports, or niche technical documents). Instead of retraining the model every time data changes, RAG lets the model retrieve relevant information as needed and then write an answer based on it.
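To make that concrete, here is a minimal sketch of the retrieve-then-generate loop. The document store, the keyword-overlap scoring, and the generate function are illustrative placeholders, not any specific library's API:

```python
# A toy RAG loop: retrieve the most relevant documents, then hand them
# to the model as context. The scoring here is naive keyword overlap;
# real systems use vector embeddings, but the overall flow is the same.

documents = [
    "Company policy: remote work requires manager approval.",
    "Q3 report: revenue grew 12% year over year.",
    "Security notice: rotate API keys every 90 days.",
]

def retrieve(query, docs, k=2):
    """Rank documents by how many query words they share."""
    query_words = set(query.lower().split())
    scored = sorted(
        docs,
        key=lambda d: len(query_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def generate(prompt):
    """Placeholder for a real LLM call."""
    return f"[LLM answer based on a prompt of {len(prompt)} characters]"

query = "How often should API keys be rotated?"
context = "\n".join(retrieve(query, documents))
print(generate(f"Answer using only this context:\n{context}\n\nQuestion: {query}"))
```

The key point is that the model's knowledge never has to be retrained; only the document store changes.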
In 2024, Microsoft introduced GraphRAG – a version of RAG that organizes retrieved information as a knowledge graph instead of a flat list of documents. This helps the model understand how entities, facts, and relationships connect to each other. As a result, the AI can answer more complex questions, follow links between concepts, and reduce contradictions by reasoning over structured relationships instead of isolated text.
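The sketch below illustrates the underlying idea (it is not Microsoft's actual GraphRAG implementation): facts are stored as subject-relation-object triples, so the system can hop between connected entities rather than matching isolated chunks of text:

```python
# A knowledge graph as (subject, relation, object) triples. Multi-hop
# traversal lets a GraphRAG-style system follow links between entities.

triples = [
    ("AURA", "proposed_by", "Weijie Wang et al."),
    ("AURA", "protects", "knowledge graphs"),
    ("knowledge graphs", "used_by", "GraphRAG"),
    ("GraphRAG", "introduced_by", "Microsoft"),
]

def neighbors(entity):
    """All facts directly connected to an entity."""
    return [(s, r, o) for (s, r, o) in triples if s == entity or o == entity]

# Two-hop reasoning: what does AURA protect, and who uses that?
for _, rel, obj in [t for t in neighbors("AURA") if t[1] == "protects"]:
    print("AURA", rel, obj)
    for s, r, o in neighbors(obj):
        if s == obj:
            print(" ", s, r, o)
```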
Since these knowledge graphs can be expensive to build and often contain proprietary information, they are attractive targets for cybercriminals, nation-states, and other malicious actors.
In their research paper, titled Making Theft Useless: Adulteration-Based Protection of Proprietary Knowledge Graphs in GraphRAG Systems, authors Weijie Wang, Peizhuo Lv, et al. proposed a defense mechanism called Active Utility Reduction via Adulteration, or AURA, which poisons the knowledge graphs (KGs) so that any LLM relying on a stolen copy gives wrong answers and hallucinates.
The only way to get correct answers is with a secret key. The researchers acknowledged that the system is not without its flaws, but said it worked well in the vast majority of cases (roughly 94%).
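The paper's actual construction is more sophisticated than the article describes, so the following is only a loose conceptual analogy of key-gated adulteration, with hypothetical names throughout (SECRET_KEY, tag, verify): genuine facts are mixed with plausible decoys, and only the holder of the secret key can tell which triples are real.

```python
# A rough conceptual sketch, NOT the paper's algorithm: genuine triples
# carry a keyed MAC, decoys carry random-looking tags. Without the key,
# the two are indistinguishable, so a thief's KG is riddled with poison.
import hashlib
import hmac

SECRET_KEY = b"owner-only-key"  # hypothetical; held only by the KG owner

def tag(triple):
    """Keyed MAC over a triple; infeasible to forge without the key."""
    msg = "|".join(triple).encode()
    return hmac.new(SECRET_KEY, msg, hashlib.sha256).hexdigest()

genuine = ("drug_x", "treats", "condition_y")
decoy = ("drug_x", "treats", "condition_z")  # poisoned fact

published_kg = [
    (genuine, tag(genuine)),                            # verifies under the real key
    (decoy, hashlib.sha256(b"decoy").hexdigest()),      # random-looking, never verifies
]

def verify(triple, mac, key=SECRET_KEY):
    expected = hmac.new(key, "|".join(triple).encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, mac)

# A thief without the key retrieves both facts and the LLM hallucinates;
# the legitimate owner filters down to verified triples only.
clean = [t for (t, mac) in published_kg if verify(t, mac)]
print(clean)  # [('drug_x', 'treats', 'condition_y')]
```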
“By degrading the utility of the stolen KG, AURA offers a practical solution for intellectual property protection in GraphRAG,” the authors stated.
Via The Register