- Chatgpt-Server-Side Error allows attackers to steal data without any user interaction
- Shadowleak bypassing traditional end point security altogether
- Millions of business users could be exposed due to shadowed exploits
Companies are increasingly using AI tools such as Chatgpt’s deep research agent to analyze emails, CRM data and internal reports for strategic decision making, experts have warned.
These platforms offer automation and efficiency, but also introduce new security challenges, especially when sensitive business information is involved.
Radware recently revealed a zero-click error in Chatgpt’s deep research agent, called “Shadowleak,” but unlike traditional vulnerabilities, this error distinguishes sensitive data hidden.
Shadowleak: a zero-click, server page utilization
It allows attackers to exfiltrating sensitive data completely from Openai servers without requiring interaction from users.
“This is the very zero-click attack,” said David Aviv, head technology officer at Radware.
“No user action is required, no visible signal and no way for the victims to know that their data has been compromised. Everything is completely behind the scenes through autonomous agent actions on Openai Cloud servers.”
Shadowleak also operates independently of final points or networks, making detection extremely difficult for business security teams.
The researchers demonstrated that simply sending an e -mail with hidden instructions could trigger the deep research agent to leak information autonomously.
Pascal Geenens, director of Cyber Threat Intelligence at Radware, explained that “companies that adopt AI cannot rely on built -in protective measures alone to prevent abuse.
“AI-driven workflows can be manipulated in ways not yet expected, and these attacking vectors often bypass the visibility and detection features of traditional security solutions.”
The vulnerability represents the first clean server-side zero-click-data ex-filtration, which leaves hardly any evidence from the company’s perspective.
With Chatgpt reporting over 5 million paying business users, the potential exposure scale is significant.
Human supervision and strict access control remains critical when sensitive data is connected to autonomous AI agents.
Therefore, organizations that adopt AI should approach these tools with caution, continuously evaluate security holes and combine technology with informed operational practice.
How to remain safe
- Implement layered cyber security defense to protect against several types of attacks simultaneously.
- Monitor regular AI-driven workflows to detect unusual activity or potential data leaks.
- Implement the best antivirus solutions across systems for protection against traditional malware attacks.
- Maintain robust ransomware protection to protect sensitive information from lateral movement threats.
- Enforce strict access controls and user permissions for AI tools that interact with sensitive data.
- Provide human supervision when autonomous AI agents have access to or process -sensitive information.
- Implement logging and revision of AI agent activity to identify deviations early.
- Integrate additional AI tools for deviation of anomaly and automated security warnings.
- Educate employees about AI-related threats and the risk of autonomous agent work.
- Combine software defense, operational best practice and continuous vigilance to reduce exposure.



