- OpenClaw exposures reveal thousands of Internet-accessible high-risk systems
- AI agents are being deployed with excessive permissions across critical environments
- Most observed OpenClaw implementations are exposed to remote code execution vulnerabilities
Agent systems are moving quickly from experimentation to everyday workflows, but recent findings suggest that security practices are not keeping pace.
According to SecurityScorecard, thousands of OpenClaw implementations are exposed directly to the Internet with minimal security measures.
The team identified 40,214 Internet-exposed OpenClaw instances in total, with 28,663 unique IP addresses hosting control panels accessible from anywhere on the Internet.
Vulnerable AI agents become a hacker’s dream target
“The math is simple: When you give an AI agent full access to your computer, you give the same access to anyone who can compromise it,” the researchers stated.
Approximately 63% of observed deployments appear to be vulnerable to remote code execution, which allows attackers to take over the host machine without user interaction.
Among the exposures, the researchers identified three high-severity Common Vulnerabilities and Exposures (CVEs) affecting OpenClaw, with CVSS scores ranging from 7.8 to 8.8.
Public exploit code is already available for all three vulnerabilities, meaning attackers do not need advanced skills to compromise vulnerable systems.
The research also found that 549 exposed instances correlate with past breach activity, and a further 1,493 are associated with known vulnerabilities that increase the risk to users.
The exposed deployments are heavily concentrated in large cloud and hosting providers, indicating repeatable and easily replicated insecure deployment patterns.
OpenClaw, formerly known as Moltbot and Clawdbot, markets itself as a personal AI agent that can schedule meetings, send emails and manage tasks on behalf of users.
The problem is not the AI's capabilities, but the access and permissions granted to these systems without proper security controls.
“In practice, because it was written by AI, security was not a dominant feature in the development process,” said Jeremy Turner, VP of Threat Intelligence at SecurityScorecard.
“For people who want to use the more agentic AI systems, you really need to think about what integrations you support and what permissions you actually grant.”
Many users configure these bots with personal and company names, revealing exactly who is using these AI tools and making them attractive targets for attackers.
Every time a user connects an AI agent to a platform, they give it an identity with specific permissions.
This identity may send content, access email, read files, or interact with other systems on the user’s behalf.
“The risk is not that these systems think for themselves,” Turner said. “It’s that we give them access to everything.”
“It’s like handing your laptop to a stranger on the street and hoping nothing bad happens… Any of the communications… on that device… will be untrusted third-party interfaces that can… perform certain actions.”
A compromised agent can be instructed to transfer money, delete files, or send malicious messages without immediately raising the alarm because the behavior appears legitimate.
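The identity-and-permissions risk described above can be reduced with an explicit, deny-by-default allowlist. The sketch below is purely illustrative and is not OpenClaw code; the `AgentIdentity` class and the action names are hypothetical.

```python
# Minimal sketch of least-privilege scoping for an agent identity.
# All names here (AgentIdentity, the action strings) are hypothetical.

class PermissionDenied(Exception):
    pass

class AgentIdentity:
    """Deny-by-default wrapper: the agent may only perform allowlisted actions."""

    def __init__(self, name, allowed_actions):
        self.name = name
        self.allowed_actions = frozenset(allowed_actions)

    def perform(self, action, handler, *args):
        # Any action not explicitly granted is refused.
        if action not in self.allowed_actions:
            raise PermissionDenied(f"{self.name} is not allowed to {action}")
        return handler(*args)

# Grant the calendar bot only what it needs: no email, no file access.
calendar_bot = AgentIdentity("calendar-bot", {"read_calendar", "create_event"})

calendar_bot.perform("create_event", lambda title: f"created: {title}", "standup")
try:
    calendar_bot.perform("send_email", lambda to: None, "someone@example.com")
except PermissionDenied as e:
    print(e)  # the unscoped action is blocked before it runs
```

Even a scheme this simple narrows the blast radius: a compromised agent can only misuse the handful of actions it was granted, rather than everything the host account can do.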
Unfortunately, the report reveals a fundamental disconnect between AI adoption and security practices.
Users are being asked to give these agents broad system access, and in many cases this has already led to data exposure, unintended actions and loss of control.
In some cases, OpenClaw takes actions beyond what users explicitly instruct, and Microsoft has since warned that it should not be run on standard personal or corporate devices.
Chinese authorities have restricted its use in office environments due to its propensity for data exposure and broader security risks.
Some OpenClaw vulnerabilities allow hackers to access sensitive data, and it has been used to distribute malware through GitHub repositories.
“You shouldn’t just blindly download one of these things and start using it on a system that has access to your entire personal life. Build in some separation and run some of your own experiments before you really trust the new technology to do what you want it to do,” Turner said.
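One way to build in the separation Turner describes is to confine an agent's file operations to a dedicated scratch directory while experimenting. The sketch below is a generic illustration, not OpenClaw configuration; the sandbox location and `safe_path` helper are assumptions.

```python
# Sketch: confine an agent's file access to a sandbox directory,
# rejecting path traversal out of it. Generic illustration only.
from pathlib import Path

SANDBOX = Path("/tmp/agent-sandbox").resolve()

def safe_path(requested: str) -> Path:
    """Resolve a requested path and refuse anything outside the sandbox."""
    candidate = (SANDBOX / requested).resolve()
    if SANDBOX != candidate and SANDBOX not in candidate.parents:
        raise PermissionError(f"refusing access outside sandbox: {requested}")
    return candidate

SANDBOX.mkdir(parents=True, exist_ok=True)
safe_path("notes/todo.txt")        # fine: stays inside the sandbox
try:
    safe_path("../../etc/passwd")  # traversal attempt is rejected
except PermissionError as e:
    print(e)
```

Running the agent under a separate low-privilege user account or inside a container applies the same principle at the operating-system level.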