- GenAI SaaS usage tripled, with prompt volumes increasing sixfold in one year
- Almost half of users rely on unapproved “Shadow AI”, creating huge visibility gaps
- Sensitive data leaks doubled, with insider threats linked to personal cloud app usage
Generative artificial intelligence (GenAI) may be great for productivity, but it comes with some serious security and compliance complications. This is according to a new report from Netskope, which says that as the use of GenAI in the office skyrockets, so do policy violations.
In its Cloud and Threat Report: 2026, released earlier this week, Netskope said GenAI Software-as-a-Service (SaaS) usage among enterprises is “rapidly increasing” and the number of people using tools like ChatGPT or Gemini has tripled over the year.
Users are also spending significantly more time in these tools – the number of prompts sent to GenAI apps has increased sixfold in the last 12 months, from around 3,000 per month a year ago to more than 18,000 per month today.
Shadow AI
What’s more, the top 25% of organizations send more than 70,000 prompts per month, and the top 1% send more than 1.4 million prompts per month.
But many of these tools and use cases were never approved by the appropriate departments or managers. Nearly half (47%) of GenAI users rely on personal AI apps (so-called “Shadow AI”), leaving the organization with no insight into what data is being shared or how those tools handle it.
As a result, the number of incidents where users send sensitive data to AI apps has doubled in the past year.
Now, the average organization sees a staggering 223 incidents per month. Netskope also said personal apps are a “significant insider threat risk,” as 60% of insider threat incidents involved personal cloud app instances.
Regulated data, intellectual property, source code, and credentials are often sent to personal app instances in violation of organizational policies.
“Organizations will struggle to maintain data governance as sensitive information flows freely into unapproved AI ecosystems, leading to an increase in accidental data exposure and compliance risks,” the report concludes.
“Attackers will conversely take advantage of this fragmented environment and leverage AI to perform hyper-efficient reconnaissance and create highly customized attacks targeting proprietary models and training data.”