- AI use is exploding, but most European companies still operate without clear rules or policies
- Organizations celebrate productivity gains while ignoring rising security threats from deepfakes and AI abuse
- Employees use generative AI daily, but few know when, where, or how they should use it
As generative AI gains traction across Europe's workplaces, many organizations are embracing its capabilities without establishing formal policies to guide its use.
According to ISACA, 83% of IT and business professionals believe AI is already being used by staff within their organizations, yet only 31% report having a comprehensive internal AI policy.
The use of AI in the workplace does bring benefits. Twenty-five percent of respondents say AI has already improved productivity, 71% cite efficiency gains and time savings, and 62% are optimistic that AI will further improve their organizations over the next year.
Productivity gains without structure are a ticking time bomb
However, AI adoption is not universally positive; whatever perceived gains it brings, it also comes with warnings.
"The UK government has made it clear through its AI Opportunities Action Plan that responsible AI adoption is a national priority," said Chris Dimitriadis, ISACA's Chief Global Strategy Officer.
"AI threats evolve rapidly, from deepfakes to phishing, and without adequate education, investment and internal policies, businesses will struggle to keep pace. Bridging this risk-action gap is vital if the UK is to lead with innovation and digital trust."
This gap between enthusiasm and governance poses significant challenges.
Concern about AI misuse is high: 64% of respondents are extremely or very worried that generative AI will be turned against them.
Yet only 18% of organizations are investing in tools to detect deepfakes, even though 71% expect them to proliferate in the near future.
These figures reflect a clear risk-action gap, where awareness of threats does not translate into meaningful protective measures.
The situation is further complicated by a lack of role-specific guidance. Without it, employees are left to decide for themselves when and how to use AI, which increases the risk of unsafe or inappropriate use.
"Without guidance, rules or training in place on the extent to which AI can be used at work, employees may continue to use it in the wrong context or in an unsafe way. Likewise, they may not be able to spot misinformation or deepfakes as they would if they were equipped with the right knowledge and tools."
This absence of structure is not only a security risk but also a missed opportunity for professional development.
Some 42% of respondents believe they need to improve their AI knowledge within six months to stay competitive in their roles.
This marks an increase of 8 percentage points from the previous year and reflects a growing realization that skills development is critical.
Within two years, 89% expect to need AI upskilling, underscoring the urgency of formal training.
That said, companies seeking out the best AI tools, including the best LLMs for coding and the best AI writers, must also account for the responsibilities that come with them.