- ASICs are far more efficient than GPUs for inference, much as they proved to be for cryptocurrency mining
- The AI inference chip market is projected to grow nearly sixfold by the end of this decade
- Hyperscalers like Google have already jumped on the bandwagon
Already a leader in AI and GPU technologies, Nvidia is moving into the Application-Specific Integrated Circuit (ASIC) market to address growing competition and changing trends in AI semiconductor design.
The global rise of generative AI and large language models (LLMs) has significantly increased demand for GPUs, and Nvidia CEO Jensen Huang confirmed in 2024 that the company would recruit 1,000 engineers in Taiwan.
Now, as reported by Taiwan’s Commercial Times (originally published in Chinese), the company has established a new ASIC division and is actively recruiting talent.
The rise of inference chips
Nvidia’s H-series GPUs, optimized for training workloads, have been widely used to build AI models. However, the AI semiconductor market is undergoing a shift towards inference chips, often implemented as ASICs.
This shift is driven by demand for chips optimized for running real-world AI applications such as large language models and generative AI. Unlike general-purpose GPUs, ASICs offer superior efficiency for inference tasks, just as they do for cryptocurrency mining.
According to Verified Market Research, the AI inference chip market is expected to grow from $15.8 billion in 2023 to $90.6 billion by 2030.
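For context, those figures imply a compound annual growth rate of roughly 28% over the seven-year span, a back-of-the-envelope calculation from the cited values rather than a number from the report:

$$\text{CAGR} = \left(\frac{90.6}{15.8}\right)^{1/7} - 1 \approx 0.28$$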
Major tech players have already adopted custom ASIC designs: Google’s “Trillium” AI chip, for instance, was made generally available in December 2024.
The shift towards custom AI chips has intensified competition among semiconductor giants. Companies like Broadcom and Marvell have risen in relevance and stock value as they partner with cloud service providers to develop specialized chips for data centers.
To stay ahead, Nvidia’s new ASIC division is leveraging local expertise, recruiting talent from leading Taiwanese chip designers such as MediaTek.