- Research finds that 42% of leaders identify as AI "skeptics" who believe their business's AI claims are exaggerated
- Skeptics worry about the financial, psychological and physical risks AI poses to customers
- Realists report clear benefits, including better work quality and time efficiency
Artificial intelligence has reshaped jobs across the globe with the promise of efficiency, smarter decisions and new business opportunities.
As adoption accelerates, evidence indicates that not all leaders embrace AI tools with equal confidence.
Recent research from Adaptavist Group reveals a growing gap between leaders who trust their company's AI claims and those who fear the technology is overhyped.
A sharp gap between skeptics and realists
The report found that almost half (42%) of leaders identify as AI "skeptics", believing that their business's AI claims are exaggerated, while 36% consider themselves AI "realists", confident that expectations are grounded.
For skeptics, the adoption of AI tools often brings unease: 65% worry that their organization's approach puts customers at financial, psychological or physical risk.
Almost half of skeptical leaders fear being wrongly accused of misusing AI, and 42% hide their AI use at work to avoid repercussions.
Realists report far lower levels of anxiety, suggesting that perception plays a key role in shaping employees' experience of AI and LLM technology.
In companies where skeptics dominate, AI adoption is driven more by obligation than by measurable results.
Many skeptical leaders say they encourage AI use because they feel they should, rather than because it delivers specific value.
Spending remains high, with over a third of organizations investing between £1 million and £10 million in AI initiatives in the past year.
Yet pressure and inadequate education (59% have received no formal AI training) continue to limit effectiveness.
In comparison, realists promote experimentation, provide training and measure results in ways that support both the technology and the people using it.
Leaders in realist-led organizations report clearer benefits from AI, including improvements in work quality, time efficiency and output.
Ethical concerns such as plagiarism, bias and hallucinations are far less pronounced: only 37% of realists flagged ethical risks, compared with 74% of skeptics.
They also spend less time correcting AI outputs, reflecting stronger guidance and support.
These findings are in line with recent MIT research suggesting that 95% of generative AI pilots fail, indicating that organizational culture and preparation are crucial factors in AI success.
The rapid spread of AI tools exacerbates the gap. Twenty-four percent of skeptics feel overwhelmed by too many tools arriving too fast, while realists remain confident in AI's value.
“The contrast between leaders who are confident in their organization’s AI journey and those who are struggling with poor results, hasty implementations and a reluctant workforce is stark,” said Jon Mort, CTO of Adaptavist Group.
“To unlock AI’s true value, organizations must be quick to experiment but take time to think through rollouts, investing in training and creating an environment where both people and technology can thrive.”