- AI-assisted fraud has surged, with dark web chatter about AI up 371% since 2019 and phishing campaigns growing more convincing
- Deepfake-enabled identity attacks caused verified losses of over $347 million globally
- Subscription-based AI crime creates a stable, growing underground market
Cybercriminals are now using artificial intelligence to automate fraud, scale phishing campaigns and industrialize impersonation at a level that was previously impractical.
Unfortunately, AI-assisted attacks may be among the biggest security threats your business faces this year, but staying alert and acting quickly can keep you one step ahead.
Group-IB’s Weaponized AI report shows that criminals’ growing use of AI represents a distinct fifth wave of cybercrime, driven by the commercial availability of AI tools rather than isolated experiments.
Rise in AI-powered cybercrime
Evidence from dark web surveillance shows that AI-related cybercrime is not a short-term response to new technologies.
Group-IB says first-time dark web posts referencing AI-related keywords increased 371% between 2019 and 2025.
The most pronounced acceleration followed the public release of ChatGPT at the end of 2022, after which interest has remained consistently high.
By 2025, tens of thousands of forum discussions each year referenced AI abuse, indicating a stable underground market rather than experimental curiosity.
Group-IB analysts identified at least 251 posts that were explicitly focused on exploiting large language models, with most references tied to OpenAI-based systems.
A structured AI crime economy has emerged, with at least three vendors offering self-hosted Dark LLMs without security restrictions.
Subscription prices range from $30 to $200 per month, with some vendors claiming more than 1,000 users.
One of the fastest-growing segments is impersonation services: mentions of deepfake tools used to bypass identity verification are up 233% year over year.
Entry-level synthetic identity kits sell for as little as $5, while real-time deepfake platforms cost between $1,000 and $10,000.
Group-IB recorded 8,065 deepfake-enabled fraud attempts at a single institution between January and August 2025, with verified global losses of $347 million.
AI-assisted malware and API abuse have proliferated, with AI-generated phishing now embedded in malware-as-a-service platforms and remote access tools.
Experts warn that AI-powered attacks can bypass traditional defenses unless teams continuously monitor and update systems.
Networks need firewalls that can identify unusual traffic and AI-generated phishing attempts.
With appropriate endpoint protection, companies can detect suspicious activity before malware or remote access tools spread.
Rapid, adaptive malware removal remains critical because AI-enabled attacks can execute and spread faster than conventional defenses can respond.
Combined with a layered security approach and anomaly detection, these measures help stop intrusions such as deepfake calls, cloned voices and fake login attempts.
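The anomaly detection experts recommend can start as simply as baselining each account's normal activity and flagging sharp deviations. The minimal Python sketch below illustrates the idea with a z-score check on hourly login volumes; the account names, counts and threshold are hypothetical examples, not taken from the Group-IB report, and a production system would draw on far richer signals from a SIEM.

```python
from statistics import mean, stdev

# Hypothetical hourly login counts per account (illustrative values only).
# In practice these would be streamed from authentication logs or a SIEM.
hourly_logins = {
    "alice": [3, 4, 2, 3, 5, 4, 3, 41],  # sudden spike in the latest hour
    "bob":   [1, 2, 1, 1, 2, 1, 2, 2],   # normal, steady activity
}

def is_anomalous(counts, z_threshold=3.0):
    """Flag the latest count if it deviates sharply from the baseline."""
    baseline, latest = counts[:-1], counts[-1]
    mu, sigma = mean(baseline), stdev(baseline)
    if sigma == 0:  # flat baseline: any increase above it is suspicious
        return latest > mu
    return (latest - mu) / sigma > z_threshold

for account, counts in hourly_logins.items():
    if is_anomalous(counts):
        print(f"ALERT: unusual login volume for {account}: {counts[-1]} attempts")
```

Run as-is, this prints an alert only for "alice", whose 41 attempts sit far outside her baseline; the same pattern generalizes to API calls, file transfers or any other metric worth baselining.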