- The EU AI Act requires explainability and accountability for AI systems
- Only 38% of workers can pinpoint who is responsible in their company
- More than half (59%) aren’t even sure how quickly they could shut down AI in a crisis
Despite rapid AI adoption, new research from ISACA suggests that many businesses may be going in blind – more than half (59%) of UK businesses would not even know how quickly they could stop AI during a crisis.
Only around one in five (21%) say they feel confident stopping an AI system within 30 minutes, highlighting major security gaps.
And it’s not just shutting them down that’s a problem – fewer than half (42%) say they could explain an AI failure to management or regulators.
Are companies blind to the risks of AI?
ISACA explained that these gaps affect not only business operations and reputation, but also legal compliance: the EU AI Act requires explainability and accountability.
Part of the fault comes down to unclear accountability, with 20% of workers unsure who is responsible for AI failures. Poor visibility is also a contributing factor: one in three organizations does not require the use of AI in the workplace to be disclosed, which ISACA says creates serious blind spots.
The report argues that companies currently treat the issue as a purely technical problem, when it is in fact an organization-wide governance challenge. “Truly closing the gap cannot be done by process changes alone,” wrote Chief Global Strategy Officer Chris Dimitriadis. “Rather, it will require professionals who have the expertise to rigorously evaluate AI risk, embedding oversight across the entire lifecycle.”
Looking ahead, companies are encouraged to define senior-level accountability and roll out better visibility and auditing. They must also build AI incident response into their strategies and factor it into their broader cyber security posture.
With only 38% of respondents identifying the board or a director as being responsible in the event of an AI incident, it is clear that more needs to be done to communicate information and processes throughout the workforce.