- Sophos report finds companies are concerned that GenAI shortcomings could hurt their cybersecurity
- 99% say AI capabilities are important when choosing a provider
- A human-first approach appears to be key, says Sophos
The rise of artificial intelligence has brought with it an increase in cybersecurity threats, and companies are struggling to adapt, new research claims.
A report from Sophos revealed that nine out of ten (89%) IT managers worry that shortcomings in generative AI systems could hurt their company’s cybersecurity strategy.
Despite this, almost all (99%) IT leaders now consider AI capabilities important when choosing a cybersecurity provider – a perfect example of fighting fire with fire.
AI’s role in cybersecurity
Artificial intelligence has given threat actors new capabilities, turning unskilled attackers into more sophisticated adversaries while making it harder for analysts to trace the origin of threats.
One in five respondents hope that AI will help them improve protection against cyberthreats, with 14% hoping for reduced employee burnout.
However, it all comes at a price: four out of five think that new AI tools embedded in their cybersecurity solutions will increase tooling costs. Still, 87% believe the savings will offset the initial costs.
“We haven’t taught machines to think; we have simply given them the context to accelerate the processing of large amounts of data,” said Sophos Global Field CTO Chester Wisniewski, adding that companies should “trust, but verify” GenAI tools.
An overwhelming majority (98%) of the companies surveyed now have some form of AI embedded in their cybersecurity infrastructure, but 84% are concerned about pressure to reduce workforces due to over-reliance on the technology.
Wisniewski added: “The potential for these tools to speed up security workloads is great, but it still requires the context and understanding of their human supervisors to realize this benefit.”
Looking ahead, Sophos encourages IT leaders to evaluate AI providers on factors such as the quality and source of their training data, to establish measurable results they hope to achieve with AI, and to take a human-first approach.