- 58-59% of workers admit to using shadow AI at work
- Datasets, employee names and financials are all shared with non-approved tools
- Could IT teams meet employees where they are to ensure better compliance?
With AI tools now a common sight in many businesses, new research from BlackFog has highlighted that while most (86%) employees say they now use AI for work tasks at least weekly, three-fifths (58%) admit to using unapproved AI or free, publicly available tools instead of company-provided tools, putting their business at risk.
Enterprise-provided tools are important for delivering enterprise-class security, governance and privacy protection, but many workers complain that the AI they’re getting doesn’t fit what they need.
More importantly, 63% believe it’s acceptable to use AI without IT approval, and 60% agree that unapproved AI is worth the security risk if it helps them meet deadlines, suggesting a clear disconnect between business goals and how companies communicate them to staff.
Shadow AI is widespread in employee workflows
Shadow AI “should raise red flags for security teams and highlights the need for greater visibility into these security blind spots,” wrote BlackFog CEO Dr. Darren Williams.
This is because 33% of workers admit to having shared research or datasets with unapproved AI, 27% have shared employee data such as names, salary or performance, and 23% have shared financial or sales data.
But while it may be up to IT teams to double down on rules and expectations around AI, they face an uphill battle: C-suite and senior executives are more likely than junior and administrative staff to believe that speed outweighs privacy and security.
And BlackFog isn’t the only company to have revealed widespread use of shadow AI – Cybernews also found that 59% of workers use unapproved AI in the workplace, and that an even higher 75% of those users have shared sensitive information with these unapproved tools.
Similarly, the report found that 57% of workers’ direct managers support the use of unapproved AI. “It creates a gray area where employees feel encouraged to use AI, but companies lose track of how and where sensitive information is shared,” warned security researcher Mantas Sabeckis.
Looking forward, there are two clear solutions to eradicating shadow AI. First, IT teams need to reiterate the risks involved and guide users to approved tools, but second, it’s clear that the current approved tools are not suitable for many employees, so IT teams should meet them where they are and offer enterprise-grade versions of these apps.