- DeepSeek's AI model performs poorly against its rivals in security testing
- The R1 model showed a 100% attack success rate in Cisco's testing
- AI chatbots can be 'jailbroken' to perform malicious tasks
The newest AI on the scene, DeepSeek, has been tested for vulnerabilities, and the results are alarming.
A new Cisco report claims that DeepSeek R1 showed a 100% attack success rate, failing to block a single harmful prompt.
DeepSeek has taken the world by storm as a high-performance chatbot developed for a fraction of the price of its rivals, but the model has already suffered a security breach, with over a million records and critical databases reportedly left exposed. Here's everything you need to know about the failings of the DeepSeek R1 large language model in Cisco's testing.
Harmful requests
Cisco's testing used 50 random prompts from the HarmBench dataset, covering six categories of harmful behavior, including misinformation and disinformation, cybercrime, illegal activities, chemical and biological prompts, and general harm.
Using harmful prompts to get around an AI model's guidelines and usage policies is known as 'jailbreaking', and we've even written a guide on how it can be done. Because AI chatbots are specifically designed to be as helpful to the user as possible, it is remarkably easy to do.
The R1 model failed to block a single harmful prompt, demonstrating the lack of guardrails the model has in place. This means DeepSeek is 'highly susceptible to algorithmic jailbreaking and potential misuse'.
In comparison, other models all reportedly offered at least some resistance to harmful prompts. The model with the lowest attack success rate (ASR) was OpenAI's o1-preview, which had an ASR of just 26%.
For comparison, GPT-4o had a worrying 86% ASR and Llama 3.1 405B had a similarly alarming 96% ASR.
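For readers curious what an 'attack success rate' actually measures, here is a minimal sketch of how such an evaluation loop might be structured. This is not Cisco's actual test harness; the helper names (query_model, is_harmful_response) and the sampling details are hypothetical placeholders standing in for a real model API and a real safety judge.

```python
# Illustrative sketch of an attack-success-rate (ASR) evaluation.
# NOT Cisco's harness: query_model and is_harmful_response are hypothetical
# placeholders you would wire up to the model under test and to a judge.

import random


def query_model(prompt: str) -> str:
    """Placeholder for a call to the chatbot being evaluated."""
    raise NotImplementedError("Connect this to the model under test.")


def is_harmful_response(prompt: str, response: str) -> bool:
    """Placeholder for a judge deciding whether the model complied with
    a harmful request instead of refusing it."""
    raise NotImplementedError("Connect this to a classifier or human review.")


def attack_success_rate(prompts: list[str], sample_size: int = 50) -> float:
    """Sample prompts (e.g. from a benchmark such as HarmBench), send each
    to the model, and return the fraction that produced a harmful reply."""
    sample = random.sample(prompts, min(sample_size, len(prompts)))
    successes = sum(
        1 for p in sample if is_harmful_response(p, query_model(p))
    )
    return successes / len(sample)

# An ASR of 1.0 would mean every harmful prompt got through,
# which is what Cisco reported for DeepSeek R1.
```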
“Our research underscores the urgent need for rigorous security evaluation in AI development to ensure that breakthroughs in efficiency and reasoning do not come at the cost of safety,” Cisco said.
Staying safe when using AI
There are factors to consider if you want to use an AI chatbot. For example, models such as ChatGPT could be considered a bit of a privacy nightmare, since they store users' personal data, and parent company OpenAI never asked people for consent to use their data, nor is it possible for users to check what information has been stored.
Similarly, DeepSeek's privacy policy leaves much to be desired, as the company may collect names, email addresses, any data entered into the platform, and technical information about your devices.
Large language models scrape the internet for data; it's a fundamental part of how they are built. So if you object to your information being used to train these models, AI chatbots are probably not for you.
To use a chatbot safely, be mindful of the risks. First of all, always check that the chatbot is legitimate, as malicious bots can imitate real services to steal your information or infect your device with malware.
Secondly, avoid entering personal information into a chatbot, and be suspicious of any bot that asks for it. Never share your financial, health, or login information with a chatbot. Even if the chatbot is legitimate, a cyberattack could result in that data being stolen, putting you at risk of identity theft or worse.
Good general practice for using any application is to use a strong password, and if you want some tips on how to create one, we have some for you here. Equally important is keeping your software updated regularly so that any security flaws are patched as soon as possible, and monitoring your accounts for any suspicious activity.