DeepSeek's R1 AI is 11 times more likely to be exploited by cybercriminals than other AI models – whether by producing harmful content or by being vulnerable to manipulation.
This is the worrying finding of new research conducted by Enkrypt AI, an AI security and compliance platform. The security warning adds to ongoing concerns following last week's data breach, which exposed over a million records.
China-developed DeepSeek has sent shockwaves through the AI world since its January 20 release. Around 12 million curious users worldwide downloaded the new AI chatbot within two days, a growth rate even faster than ChatGPT's. However, widespread privacy and security concerns have prompted many countries to either begin investigating or ban the new tool.
Harmful content, malware and manipulation
The team at Enkrypt AI ran a series of tests to evaluate DeepSeek's security vulnerabilities, such as malware, data breaches, and prompt injection attacks, as well as its ethical risks.
The study found that the ChatGPT rival was "highly biased and susceptible to generating insecure code," the experts noted, and that DeepSeek's model is vulnerable to third-party manipulation, allowing criminals to use it to develop chemical, biological, and cybersecurity weapons.
Almost half of the tests performed (45%) bypassed the security protocols in place, generating criminal planning guides, illegal weapons information, and terrorist propaganda.
Worse still, 78% of the cybersecurity tests successfully tricked DeepSeek-R1 into generating insecure or malicious code, including malware, trojans, and other exploits. Overall, the experts found the model was 4.5 times more likely than its OpenAI counterpart to be manipulated by cybercriminals into creating dangerous hacking tools.
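Enkrypt AI has not published its test harness, so as a minimal, hypothetical sketch only: figures like the 45% bypass rate are typically produced by running a fixed set of red-team prompts against the model and counting how often it complies rather than refuses. All names below (TestCase, query_model, is_refusal, bypass_rate) are illustrative assumptions, not Enkrypt's code.

```python
# Hypothetical sketch of how a jailbreak "bypass rate" could be measured.
# query_model() and is_refusal() are placeholders, not a real API.

from dataclasses import dataclass

@dataclass
class TestCase:
    category: str   # e.g. "malware", "CBRN", "extremism"
    prompt_id: str  # reference to a red-team prompt kept in a separate, controlled set

def query_model(case: TestCase) -> str:
    """Placeholder: send the red-team prompt to the model under test."""
    raise NotImplementedError

def is_refusal(response: str) -> bool:
    """Placeholder: a classifier or human reviewer decides if the model refused."""
    raise NotImplementedError

def bypass_rate(cases: list[TestCase]) -> float:
    """Fraction of red-team prompts the model answered instead of refusing."""
    bypassed = sum(1 for case in cases if not is_refusal(query_model(case)))
    return bypassed / len(cases)
```

Under this reading, a 45% bypass rate simply means that nearly half of the adversarial prompts in the test set produced a compliant answer rather than a refusal.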
"Our research findings reveal major security and safety gaps that cannot be ignored," said Sahil Agarwal, CEO of Enkrypt AI, commenting on the results. "Robust safeguards – including guardrails and continuous monitoring – are essential to prevent harmful misuse."
"Are distilled DeepSeek models less secure? Early signs point to yes. Our latest finding confirms a worrying trend: distilled AI models are more vulnerable – easier to jailbreak, exploit and manipulate," Enkrypt AI posted on X on January 30, 2025.
As mentioned earlier, at the time of writing DeepSeek is under scrutiny in many countries around the world.
While Italy was the first to launch an investigation into its privacy and security last week, several EU members have since followed, including France, the Netherlands, Luxembourg, Germany, and Portugal.
Some of China's neighboring countries are also concerned. Taiwan, for example, has banned all government agencies from using DeepSeek AI, while South Korea has opened a probe into the provider's data practices.
Unsurprisingly, the United States is also targeting its new AI rival. NASA has blocked DeepSeek on federal devices, CNBC reported on Friday, January 31, 2025, and a proposed law could now outright prohibit DeepSeek for all Americans, with anyone using the platform in the country risking million-dollar fines and even prison time.
All in all, Enkrypt AI's Agarwal said: "As the AI arms race between the United States and China intensifies, both nations are pushing the boundaries of next-generation AI for military, economic, and technological supremacy.
"Our findings, however, reveal that DeepSeek-R1's security vulnerabilities could be turned into a dangerous tool that cybercriminals, disinformation networks, and even those with biochemical warfare ambitions could exploit. These risks demand immediate attention."