- GenAI can hallucinate open source package names, experts warn
- It does not always hallucinate a different name each time
- Cyber criminals can register the names and use them to distribute malware
Security researchers have warned of a new method by which generative AI (GenAI) can be abused for cybercrime, known as 'slopsquatting'.
It starts with the fact that various GenAI tools, such as ChatGPT, Copilot, and others, hallucinate. In the context of AI, "hallucination" is when the AI simply makes things up. That may be a quote a person never said, an event that never happened, or, in software development, an open source software package that was never created.
According to Sarah Gooding of Socket, many software developers now depend heavily on GenAI when writing code. The tool may write the lines itself, or it may suggest packages for the developer to download and include in the product.
Hallucinating malware
The report adds that the AI does not always hallucinate a different name or a different package; some repeat.
"When the same hallucination-triggering prompt was repeated ten times, 43% of hallucinated packages were repeated every time, while 39% never reappeared," it says.
"Overall, 58% of hallucinated packages were repeated more than once across ten runs, indicating that a majority of hallucinations are not just random noise, but repeatable artifacts of how the models respond to certain prompts."
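The repetition analysis described above can be sketched as follows. This is an illustrative reconstruction, not code from the Socket report: given the hallucinated package names observed in each of ten runs of the same prompt, it classifies each name by how consistently it recurs.

```python
from collections import Counter

def classify_hallucinations(runs):
    """Classify hallucinated package names by how often they recur.

    `runs` is a list of sets, one per repeated prompt run, each holding
    the hallucinated package names seen in that run.  Returns the names
    that appeared in every run, the names that appeared only once, and
    the names that appeared in more than one run.
    """
    counts = Counter(name for run in runs for name in set(run))
    n_runs = len(runs)
    return {
        "every_time": {p for p, c in counts.items() if c == n_runs},
        "once_only": {p for p, c in counts.items() if c == 1},
        "repeated": {p for p, c in counts.items() if c > 1},
    }

# Toy data: 'fastparse-utils' recurs in every run, the others are one-offs.
runs = [
    {"fastparse-utils", "pyhttp-tools"},
    {"fastparse-utils", "jsonify-core"},
    {"fastparse-utils"},
]
result = classify_hallucinations(runs)
```

The names that land in `"repeated"` are the worrying ones: they are stable enough that an attacker could predict and pre-register them.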
This is purely theoretical for now, but in principle cyber criminals could map the various packages that AI hallucinates and register them on open source platforms.
Then, when a developer gets a suggestion and visits GitHub, PyPI, or the like, they will find the package and unwittingly install it, not knowing it is malicious.
Fortunately, there are no confirmed cases of slopsquatting in the wild at press time, but it is safe to say it is only a matter of time. Given that the hallucinated names can be mapped, we can assume that security researchers will eventually discover them.
The best way to protect against these attacks is to be careful when accepting suggestions from anyone, living or otherwise.
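One simple way to apply that caution in practice is to gate AI-suggested dependencies against a list the project already trusts, such as the names pinned in a lockfile. The sketch below is an illustration of that idea only (the function name and example packages are hypothetical, not from the report):

```python
def vet_suggestion(package: str, approved: set) -> str:
    """Gate an AI-suggested dependency against a project allowlist.

    Returns 'ok' if the package is already in the approved set (for
    example, names pinned in the project's lockfile), otherwise
    'review' so a human checks it before installation.
    """
    name = package.strip().lower()
    if name in approved:
        return "ok"
    # Unknown name: could be a legitimate new dependency, or a slopsquat.
    return "review"

approved = {"requests", "flask"}
vet_suggestion("Requests", approved)   # already trusted
vet_suggestion("reqeusts", approved)   # unknown -> manual review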