- ChatGPT is being asked some interesting security questions
- Users are concerned about phishing, scams, and privacy
- Feeding personal information to an AI agent puts users at risk
AI is quickly becoming a personal advisor to many people, offering help with daily schedules, rewording difficult emails, and even acting as a fellow enthusiast for niche hobbies.
While these uses are typically harmless, many people have started using ChatGPT as a security guru, and they aren't doing it in a particularly safe way.
New research from NordVPN has revealed some of the questions ChatGPT is being asked about security – from avoiding phishing attacks to wondering whether a smart toaster could become a household threat.
Don't feed ChatGPT your details
The top security question put to ChatGPT is “How can I recognize and avoid phishing scams?” – which is understandable, considering phishing is probably the most common cyber threat an ordinary person will encounter.
The rest of the questions follow a similar vein, from insights into the best VPNs to tips on how best to secure personal information online. It's definitely refreshing to see AI used as a force for good at a time when hackers are abusing AI tools to pump out malware.
However, it's not all good news, I'm afraid. NordVPN's research also highlighted some of the more bizarre security questions people ask ChatGPT, such as “Can hackers steal my thoughts through my smartphone?” and “If I delete a virus by pressing the Delete key, is my computer safe?”
Others voiced concerns about hackers potentially hearing them whisper their password as they type it, or hackers using the ‘cloud’ to access their phones while charging during thunderstorms.
“While some questions are serious and insightful, others are downright funny – but they all reveal a troubling reality: many people still misunderstand cybersecurity. This knowledge gap leaves them exposed to scams, identity theft, and social engineering. Worse, users unknowingly share personal data while seeking help,” says Marijus Briedis, CTO at NordVPN.
Many users feed AI models prompts that include sensitive personal information, such as physical addresses, contact details, credentials, and banking information.
This is especially dangerous, as most AI models will save the chat history and use it to train the AI to answer questions better. The key concern is that hackers could use carefully constructed prompts to extract that sensitive information from the AI and put it to all kinds of dishonest purposes.
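For readers who still want to ask a chatbot for security advice, one sensible habit is to strip the most obvious identifiers out of a prompt before it ever leaves your machine. Here's a minimal Python sketch of the idea; the regex patterns below are illustrative placeholders rather than a complete PII detector, and any real tool would need far more robust detection:

```python
import re

# Illustrative patterns only -- a real PII scrubber needs far more care.
# Order matters: the card pattern runs before the phone pattern so a long
# card number isn't partially matched as a phone number first.
PII_PATTERNS = [
    ("CARD", re.compile(r"\b\d(?:[ -]?\d){12,15}\b")),
    ("EMAIL", re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b")),
    ("PHONE", re.compile(r"\b\d{3}[ .-]\d{3}[ .-]\d{4}\b")),
]

def redact(prompt: str) -> str:
    """Replace obvious identifiers with placeholders before a prompt is sent."""
    for label, pattern in PII_PATTERNS:
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

if __name__ == "__main__":
    risky = ("My card 4111 1111 1111 1111 was charged twice; "
             "email me at jane.doe@example.com or call 555-867-5309.")
    print(redact(risky))
    # -> My card [CARD REDACTED] was charged twice;
    #    email me at [EMAIL REDACTED] or call [PHONE REDACTED].
```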
“Why does this matter? Because what may look like a harmless question can quickly turn into a real threat,” says Briedis. “Scammers can take advantage of the information users share – whether it's an email address, login credentials, or payment information – to launch phishing attacks, hijack accounts, or commit financial fraud. A simple chat may end up compromising your entire digital identity.”