- Following DeepSeek's wave of popularity, Nvidia has commented
- Nvidia calls DeepSeek an 'excellent AI advancement'
- Nvidia suggests its GPUs are still critically important
If you hadn't heard the fuss about DeepSeek this weekend, there's a good chance you've at least heard the term by now. It rose to fame by offering a real competitor to ChatGPT at a fraction of the price, and it has caused unrest in the stock market, sending tech stock prices tumbling. Nvidia in particular suffered a record $600 billion drop in market value, the largest single-day loss in stock market history.
DeepSeek, released by a Chinese startup of the same name, is a free AI chatbot with ambitions to take on OpenAI's ChatGPT. It also offers newer models with some multimodal capabilities, mainly in image creation and analysis. It has taken the AI world by storm and remains the top app in the Apple App Store in the US and the UK.
The app and the site have proved so popular that DeepSeek experienced an outage and a reported 'malicious attack' on the same day it rose to fame.
While Sam Altman, OpenAI's CEO, has responded, we've also heard from Nvidia, undoubtedly the global leader in AI chips, which has risen to prominence as the AI wave has continued to grow.
In an emailed statement to TechRadar, Nvidia wrote, "DeepSeek is an excellent AI advancement and a perfect example of Test Time Scaling. DeepSeek's work illustrates how new models can be created using that technique, leveraging widely-available models and compute that is fully export control compliant. Inference requires significant numbers of NVIDIA GPUs and high-performance networking. We now have three scaling laws: pre-training and post-training, which continue, and new test-time scaling."
It's a notably strong statement, calling DeepSeek "an excellent AI advancement" and speaking to the performance of DeepSeek's R1 model. It also confirms what we already knew: new models can be built using existing models and chips rather than created entirely from scratch.
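The "test-time scaling" Nvidia mentions simply means spending more compute at inference time, for instance by sampling many candidate answers and keeping the best one. Here's a deliberately toy Python sketch of that best-of-N idea; the generator and scorer below are hypothetical stand-ins for an LLM and a verifier, not anyone's production system.

```python
import random

random.seed(42)

# Hypothetical stand-ins: in practice these would be an LLM sampling
# answers and a verifier/reward model scoring them.
def generate_answer() -> float:
    """Toy 'model': each sample is a noisy guess at the true answer 10.0."""
    return 10.0 + random.gauss(0, 3)

def score(answer: float) -> float:
    """Toy 'verifier': higher score for guesses closer to the target."""
    return -abs(answer - 10.0)

def best_of_n(n: int) -> float:
    """Spend more inference compute (larger n) to get a better answer."""
    candidates = [generate_answer() for _ in range(n)]
    return max(candidates, key=score)

for n in (1, 8, 64):
    print(n, round(best_of_n(n), 2))  # the error tends to shrink as n grows
```

The point of the toy: accuracy improves with the number of samples drawn at inference, which is exactly why Nvidia is keen to note that inference "requires significant numbers of NVIDIA GPUs."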
Nvidia clearly wants to remain a key part of the story, noting that this kind of rollout requires plenty of Nvidia GPUs and playing off the fact that DeepSeek used China-specific Nvidia GPUs. Reading between the lines, it also suggests that DeepSeek will need more of its chips… at some point.
DeepSeek claims it used an innovative new training process to develop its LLMs, using trial and error to self-improve. You could say it trained its LLMs in the same way humans learn: by receiving feedback based on their actions. It also used a mixture-of-experts (MoE) architecture, meaning it activates only a small fraction of its parameters at any given time, which significantly reduces compute costs and makes it more efficient.
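For the curious, here's a minimal sketch of that mixture-of-experts idea in Python: a tiny router scores a handful of toy "experts" and only the top-scoring ones do any work. The dimensions, weights, and routing rule here are illustrative stand-ins, not DeepSeek's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes for illustration only; real models are vastly larger
d_model, n_experts, top_k = 8, 4, 2

# Each "expert" is just a small feed-forward weight matrix here
experts = [rng.normal(size=(d_model, d_model)) for _ in range(n_experts)]
gate_w = rng.normal(size=(d_model, n_experts))  # router/gating weights

def moe_forward(x: np.ndarray) -> np.ndarray:
    """Route a token vector through only the top-k experts."""
    logits = x @ gate_w                    # one router score per expert
    top = np.argsort(logits)[-top_k:]      # indices of the k best experts
    weights = np.exp(logits[top])
    weights /= weights.sum()               # softmax over the chosen experts
    # Only top_k of n_experts matrices are ever touched: the compute saving
    return sum(w * (x @ experts[i]) for i, w in zip(top, weights))

token = rng.normal(size=d_model)
out = moe_forward(token)
print(out.shape)  # (8,) -- same shape as input, computed by 2 of 4 experts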
Sam Altman also praised DeepSeek's R1 model, "particularly around what they're able to deliver for the price." He reiterated that OpenAI will "obviously deliver much better models," but he welcomed the competition. Nvidia, meanwhile, seems to be keeping its future cards closer to its chest.
It's still a waiting game of sorts to see when DeepSeek AI will reopen to new registrations and get back to full performance, but if you're curious about its staying power, read my colleague and TechRadar US Editor-in-Chief Lance Ulanoff's thoughts on its chances of sticking around in the US, as well as our hands-on of DeepSeek AI versus ChatGPT from John-Anthony Disotto, one of TechRadar's AI experts.