- Fake AI video editor ads are targeting Facebook users
- Threat group UNC6032 has been identified spreading malware
- The ads have reached over 2 million users
Google’s Mandiant Threat Defense team has identified a campaign, tracked as UNC6032, that weaponizes interest in AI tools – specifically, tools that generate videos based on user prompts.
Mandiant experts identified thousands of fake “AI video generator” websites that actually distribute malware, leading to the deployment of payloads “such as Python-based infostealers and several backdoors.”
The campaign imitates legitimate AI video generator tools such as Canva Dream Lab, Luma AI, and Kling AI to fool victims, with the fraudulent ads collectively reaching “millions of users” across both LinkedIn and Facebook – although Google suspects similar campaigns may also target users on other platforms.
The group, UNC6032, is believed to have ties to Vietnam. EU ad transparency rules allowed researchers to see that a sample of 120 malicious ads had a total reach of over 2.3 million users – although this does not necessarily translate into that many victims.
“Although our study was limited in scope, we found that these well-crafted fake ‘AI websites’ pose a significant threat to both organizations and individual users,” the researchers confirm.
“These AI tools are no longer targeted at just graphic designers; anyone can be lured in by a seemingly harmless ad. The temptation to try the latest AI tool can turn anyone into a victim. We advise users to exercise caution when engaging with AI tools and to verify the legitimacy of the website’s domain.”
Make sure to thoroughly scrutinize all ads on social media, and manually search for any software via a search engine before downloading anything, to properly verify the source.
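The domain check the researchers recommend can be illustrated with a minimal sketch: compare the registrable domain of a link against an allow-list of official vendor domains. The domains below are assumptions for illustration (confirm each vendor’s real domain through its own channels), and the eTLD+1 extraction is deliberately naive.

```python
from urllib.parse import urlparse

# Hypothetical allow-list of official domains for the AI tools named in
# the article; verify these yourself before relying on them.
KNOWN_GOOD = {"lumalabs.ai", "canva.com", "klingai.com"}

def registrable_domain(url: str) -> str:
    """Return the last two labels of the URL's hostname (naive eTLD+1).

    Note: this breaks for multi-part TLDs such as .co.uk; a real tool
    would use the Public Suffix List instead.
    """
    host = (urlparse(url).hostname or "").lower()
    return ".".join(host.split(".")[-2:])

def looks_legitimate(url: str) -> bool:
    """True only if the URL's registrable domain exactly matches the list."""
    return registrable_domain(url) in KNOWN_GOOD

print(looks_legitimate("https://www.canva.com/dream-lab"))   # True
print(looks_legitimate("https://canva-dreamlab.download/"))  # False
```

The point of the sketch is the exact-match comparison: lookalike domains such as `canva-dreamlab.download` fail the check even though they contain the brand name, which is precisely the trick these malicious ads rely on.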
We also recommend checking the best malware removal tools to keep your devices secure.