- Experts tore apart an MIT paper for making evidence-free AI claims
- Kevin Beaumont dismissed the findings as "almost complete nonsense" offered without proof
- Marcus Hutchins also mocked the research, saying he laughed harder reading its methods
The MIT Sloan School of Management has been forced to retract a working paper which claimed AI played a “significant role” in most ransomware attacks after widespread criticism from experts.
The study, co-authored by MIT researchers and executives from Safe Security, claimed that “80.83 percent of recorded ransomware incidents were attributed to threat actors using AI.”
Published earlier in 2025 and later cited by several outlets, the report drew immediate scrutiny for presenting extraordinary numbers with little evidence.
Questionable research
Among them was prominent security researcher Kevin Beaumont, who described the paper as “absolutely ridiculous” and called its findings “almost complete nonsense.”
“It describes almost every major ransomware group as using AI – without any evidence (not true either, I monitor many of them),” Beaumont wrote in a Mastodon thread.
“It even talks about Emotet (which hasn’t been around for many years) as powered by artificial intelligence.”
Cybersecurity expert Marcus Hutchins agreed, saying, “I burst out laughing at the title” and “when I read their methodology, I laughed even harder.”
He also criticized the paper for undermining public understanding of threats like ransomware and of practices such as malware removal.
After the backlash, MIT Sloan removed the paper from its website and replaced it with a note saying it was “being updated based on some recent reviews.”
Michael Siegel, one of the authors, confirmed that revisions were underway.
“We received some recent comments on the working paper and are working as quickly as possible to provide an updated version,” Siegel said.
“The main points of the paper are that the use of AI in ransomware attacks is increasing, we should find a way to measure it, and there are things companies can do now to prepare.”
Simply put, he argues that the paper does not claim a definitive global percentage, but serves as a warning about AI's growing role in cyberattacks and the need to measure it.
Even Google’s AI-powered search assistant rejected the claim, saying the number was “not supported by current data.”
The controversy reflects a growing tension in cybersecurity research, where enthusiasm for artificial intelligence can sometimes overtake factual analysis.
AI has real potential on both offense and defense, and applying it to ransomware protection, automated threat detection, and antivirus tools is a worthwhile move.
But exaggerating its malevolent use risks distorting priorities, especially when it comes from institutions as prominent as MIT Sloan.
Via The Register