- AI teams still favor Nvidia, but rivals like Google, AMD, and Intel are growing their share
- Survey reveals budget limits, power requirements, and cloud hurdles
- GPU shortages push workloads to the cloud while efficiency and testing remain overlooked
AI hardware spending is evolving as teams weigh performance, cost, and scalability, new research has found.
Liquid Web’s latest AI hardware survey polled 252 AI professionals and found that while Nvidia remains the most widely used hardware supplier, its rivals are increasingly gaining traction.
Nearly a third of respondents reported using alternatives such as Google TPUs, AMD GPUs, or Intel chips for at least part of their workloads.
The pitfalls of skipping due diligence
The sample size is admittedly small, so it does not capture the full scale of global adoption, but the results point to a clear shift in how teams are thinking about infrastructure.
A single team can deploy hundreds of GPUs, so even limited adoption of non-Nvidia setups can make a significant difference to the overall hardware footprint.
Nvidia is still preferred by over two-thirds (68%) of the teams surveyed, yet many buyers do not carefully compare alternatives before deciding.
About 28% of respondents admitted to skipping structured evaluations, and in some cases the lack of testing led to mismatched infrastructure and underpowered setups.
“Our research shows that skipping due diligence leads to delayed or canceled initiatives – a costly mistake in a fast-moving industry,” said Ryan MacDonald, CTO at Liquid Web.
Familiarity and past experience are among the strongest drivers of GPU choices. Forty-three percent of participants cited these factors, compared to 35% who prioritized cost and 37% who relied on performance testing.
Budget restrictions also weigh heavily, with 42% of respondents scaling back projects and 14% canceling them entirely due to hardware shortages or costs.
Hybrid and cloud-based setups are becoming standard. More than half of respondents said they use both on-premises and cloud systems, and many expect their cloud spending to increase over the coming year.
Dedicated GPU hosting is seen by some as a way to avoid the compromises that come with shared or fractional hardware.
Energy consumption remains a challenge. While 45% recognized efficiency as important, only 13% actively optimize for it. Many also cited power, cooling, and supply-chain setbacks as pain points.
While Nvidia continues to dominate the market, it is clear that the competition is closing the gap. Teams are finding that balancing cost, efficiency, and reliability is almost as important as raw performance when building AI infrastructure.