- Palo Alto warns that rapid AI adoption is expanding cloud attack surfaces, raising unprecedented security risks
- Excessive permissions and misconfigurations drive incidents; 80% linked to identity issues, not malware
- Non-human identities outnumber human ones and are often poorly managed, creating exploitable entry points for adversaries
Rapid enterprise adoption of artificial intelligence (AI) tools and cloud-native AI services is significantly expanding cloud attack surfaces and putting enterprises at greater risk than ever before.
This is according to the ‘State of Cloud Security Report’, a new paper published by researchers at cybersecurity firm Palo Alto Networks.
According to the paper, AI adoption raises a few key issues: the speed at which AI is being deployed, the permissions it is granted, misconfigurations, and the rise of non-human identities.
Permissions, misconfigurations, and non-human identities
Palo Alto says organizations are deploying workloads faster than they can secure them — often without full insight into how the tools access, process or share sensitive data.
In fact, the report states that more than 70% of organizations are now using AI-powered cloud services in production, a sharp increase year-on-year. The speed at which these tools are being deployed is now seen as a major contributor to an “unprecedented increase” in cloud security risk.
Then there is the problem of excessive permissions. AI services often require broad access to cloud resources, APIs and data stores – the report shows that many organizations assign overly permissive identities to AI-powered workloads. According to the research, 80% of cloud security incidents in the past year were linked to identity-related issues, not malware.
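The report does not include code, but the kind of excessive-permission check it describes can be sketched in a few lines. The snippet below is purely illustrative: the policy structure loosely mirrors AWS IAM JSON, and all names and values are hypothetical stand-ins, not anything from Palo Alto's research.

```python
# Illustrative sketch: flag identity policies that grant wildcard
# actions or resources, the classic "overly permissive" pattern.
# The policy format loosely mirrors AWS IAM JSON; names are made up.

def is_overly_permissive(policy: dict) -> bool:
    """Return True if any Allow statement grants wildcard actions or resources."""
    for stmt in policy.get("Statement", []):
        if stmt.get("Effect") != "Allow":
            continue
        actions = stmt.get("Action", [])
        resources = stmt.get("Resource", [])
        if isinstance(actions, str):
            actions = [actions]
        if isinstance(resources, str):
            resources = [resources]
        # "*" or "service:*" grants are the usual excessive-permission smell
        if any(a == "*" or a.endswith(":*") for a in actions):
            return True
        if "*" in resources:
            return True
    return False

broad = {"Statement": [{"Effect": "Allow", "Action": "s3:*", "Resource": "*"}]}
scoped = {"Statement": [{"Effect": "Allow", "Action": "s3:GetObject",
                         "Resource": "arn:aws:s3:::ml-training-data/*"}]}
print(is_overly_permissive(broad))   # True
print(is_overly_permissive(scoped))  # False
```

A scan like this is the simplest form of the least-privilege auditing the report implies: workloads whose identities pass the check are scoped to specific actions on specific resources, rather than granted blanket access.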
Palo Alto also pointed to misconfigurations as a growing problem, especially in environments that support AI development. Storage buckets, databases and AI training pipelines are often exposed, which is something threat actors are increasingly exploiting, rather than simply trying to deploy malware.
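To make the misconfiguration problem concrete, here is a minimal sketch of an inventory check for exposed storage buckets. The field names (`acl`, `block_public_access`) are hypothetical stand-ins for a cloud provider's access settings, not an API from the report.

```python
# Illustrative sketch: flag buckets with a public ACL or with
# public-access blocking disabled. Config fields are hypothetical.

def find_exposed_buckets(buckets: list[dict]) -> list[str]:
    """Return names of buckets that allow public reads or lack an access block."""
    exposed = []
    for b in buckets:
        public_acl = b.get("acl") in ("public-read", "public-read-write")
        block_off = not b.get("block_public_access", False)
        if public_acl or block_off:
            exposed.append(b["name"])
    return exposed

inventory = [
    {"name": "ai-training-data", "acl": "public-read", "block_public_access": False},
    {"name": "model-artifacts", "acl": "private", "block_public_access": True},
]
print(find_exposed_buckets(inventory))  # ['ai-training-data']
```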
Finally, the research points to an increase in non-human identities, such as service accounts, API keys, and automation tokens, that AI systems use. In many cloud environments, there are now more non-human identities than human ones, and many are poorly monitored, rarely rotated, and difficult to attribute.
“The rise of large language models (LLMs) and agent AI is pushing the attack surface beyond traditional infrastructure,” the report concluded.
“Adversaries target the tools and LLM systems, the underlying infrastructure that supports model development, the actions these systems perform, and critically their memory stores. Each represents a potential compromise.”