- Nearly 40% of IT workers admit to secretly using unauthorized generative AI tools
- Shadow AI is growing as training gaps and fears of redundancy fuel hidden use
- AI tools used without supervision may leak sensitive data and bypass existing security protocols
As artificial intelligence becomes increasingly embedded in the workplace, organizations are struggling to manage its adoption responsibly, new research suggests.
An Ivanti report claims that the growing use of unauthorized AI tools at work raises concerns about widening skills gaps and mounting security risks.
Among IT workers, over a third (38%) admit to using unauthorized generative AI tools, while almost half of office workers (46%) say that some or all of the AI tools they rely on were not provided by their employers.
Some companies allow the use of AI
Interestingly, 44% of companies have integrated AI across departments, yet a large proportion of employees secretly use unauthorized tools due to inadequate training.
One in three workers say they hide their AI use from management, often citing the “secret advantage” it gives them.
Some employees avoid revealing their use of AI because they do not want to be perceived as incompetent.
With 27% reporting AI-driven impostor syndrome and 30% worried that their roles could be replaced, the disruption also contributes to anxiety and burnout.
These behaviors point to a lack of trust and transparency and emphasize the need for organizations to establish clear and inclusive AI use policies.
“Organizations should consider building a sustainable AI governance model, prioritizing transparency and tackling the complex challenge of AI-fueled impostor syndrome through reinvention,” said Brooke Johnson, Ivanti’s Chief Legal Counsel.
The hidden use of AI also poses serious risks. Without proper oversight, unauthorized tools can leak data, bypass security protocols, and expose systems to attack, especially when used by administrators with elevated access.
Organizations should respond not by cracking down, but by modernizing. This means establishing inclusive AI policies and deploying secure infrastructure, starting with strong endpoint protection to detect rogue applications and zero trust network access (ZTNA) solutions to enforce strict access controls across distributed environments.
Ivanti notes that AI itself is not the problem; the real issues are unclear policies, weak security, and a lack of trust. Left unchecked, shadow AI could widen the skills gap, harm mental health, and compromise critical systems.