- 30% of Britons have given AI chatbots confidential personal information
- Research from NymVPN shows that corporate and customer data are also at risk
- Findings emphasize the importance of taking precautions, such as using a quality VPN
Almost one in three Britons share sensitive personal data with AI chatbots like OpenAI's ChatGPT, according to research from cybersecurity company NymVPN. 30% of Britons have fed AI chatbots confidential information such as health and banking data, potentially putting their privacy – and that of others – at risk.
This oversharing with tools such as ChatGPT and Google Gemini comes despite 48% of respondents expressing privacy concerns about AI chatbots. The problem also extends to the workplace, where employees share sensitive company and customer data.
NymVPN's findings come in the wake of a number of recent high-profile data breaches, notably the Marks & Spencer cyber attack, showing how easily confidential data can fall into the wrong hands.
“Convenience is prioritized over security”
NymVPN's research reveals that 26% of respondents admitted to having revealed financial information related to salaries, investments and mortgages to AI chatbots. Riskier still, 18% shared credit card or bank account data.
24% of those surveyed by NymVPN admit to having shared customer data – including names and email addresses – with AI chatbots. More worrying still, 16% uploaded company financial data and internal documents such as contracts. This is despite the fact that 43% express concern about sensitive company data being leaked through AI tools.
“AI tools have quickly become part of how people work, but we are seeing a worrying trend where convenience is prioritized over security,” said Harry Halpin, CEO of NymVPN.
M&S, Co-op and Adidas have all been in the headlines for the wrong reasons after falling victim to data breaches. “High-profile breaches show how vulnerable even large organizations can be, and the more personal and company data that is fed into AI, the bigger the target for cybercriminals,” Halpin said.
The importance of not oversharing
With almost a quarter of respondents sharing customer data with AI chatbots, companies are urged to implement clear guidelines and formal policies for the use of AI in the workplace.
“Employees and companies urgently need to think about how they protect both personal privacy and company data when using AI tools,” Halpin said.
Although avoiding AI chatbots entirely would be the optimal solution for privacy, it is not always practical. At a minimum, users should avoid sharing sensitive information with AI chatbots. Privacy settings can also be adjusted, such as disabling chat history or opting out of model training.
A VPN can add a layer of privacy when using AI chatbots such as ChatGPT by encrypting a user's internet traffic and masking their original IP address. This helps keep a user's location private and prevents their ISP from seeing what they are doing online. Still, even the best VPN is not enough if sensitive personal data is handed over to the AI in the first place.