- Nvidia acquires SchedMD to maintain Slurm as open source workload management software
- Slurm manages scheduling and resources for large clusters running parallel AI tasks
- Nvidia launched Nemotron 3 models, including Nano, Super and Ultra sizes for AI tasks
Nvidia has announced a major expansion of its open source efforts, combining a software acquisition with new open AI models.
The company has acquired SchedMD, the developer of Slurm, an open source workload management system widely used in high-performance computing and AI.
Nvidia will continue to operate Slurm as vendor-neutral software, ensuring compatibility with diverse hardware and maintaining support for existing HPC and AI customers.
Slurm and more
Slurm manages scheduling, queuing, and resource allocation across large computing clusters running parallel tasks.
More than half of the systems in the top 10 and the top 100 of the TOP500 supercomputer rankings rely on Slurm. Enterprises, cloud providers, research labs and AI companies across industries also use the software, including organizations in autonomous driving, healthcare, energy, financial services, manufacturing and government.
Slurm works with Nvidia’s latest hardware, and its developers continue to adapt it for high-performance AI workloads.
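To illustrate the kind of work Slurm does, here is a minimal Python sketch that writes a batch job script and submits it with the standard sbatch command. The node and GPU counts, the training script name and the job name are illustrative assumptions, not details from Nvidia's announcement.

```python
import subprocess
import tempfile

# Minimal sketch of submitting a multi-GPU training job through Slurm.
# The resource numbers and the train.py script are placeholders, not
# taken from Nvidia's announcement.
job_script = """#!/bin/bash
#SBATCH --job-name=ai-training          # name shown in the queue
#SBATCH --nodes=2                       # number of cluster nodes
#SBATCH --ntasks-per-node=4             # parallel tasks per node
#SBATCH --gres=gpu:4                    # GPUs requested per node
#SBATCH --time=04:00:00                 # wall-clock limit
#SBATCH --output=train_%j.log           # log file named after the job ID

srun python train.py                    # launch the parallel training step
"""

# Write the script to a temporary file, then hand it to sbatch; Slurm
# queues the job and handles scheduling and resource allocation from there.
with tempfile.NamedTemporaryFile("w", suffix=".sbatch", delete=False) as f:
    f.write(job_script)
    script_path = f.name

subprocess.run(["sbatch", script_path], check=True)
```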
Alongside the acquisition, Nvidia also introduced the Nemotron 3 family of open models, including the Nano, Super and Ultra sizes.
The models use a hybrid mixture-of-experts architecture designed to support multi-agent AI systems.
Nemotron 3 Nano focuses on efficient task execution, Nemotron 3 Super supports collaboration across multiple AI agents, and Nemotron 3 Ultra handles complex reasoning workflows.
Nvidia supplies these models with associated datasets, reinforcement learning libraries, and NeMo Gym training environments.
Nemotron 3 models run on Nvidia accelerated computing platforms, including workstations and large AI clusters.
Developers can combine open models with proprietary systems in multi-agent workflows using public clouds or enterprise platforms.
Nvidia provides tools, libraries, and datasets to support training, evaluation, and deployment across diverse computing environments.
Nvidia has released three trillion tokens of pre-training, post-training and reinforcement learning data for the Nemotron 3 models.
Additional AI tools, including NeMo RL and NeMo Evaluator, support reinforcement learning, model evaluation and safety assessment.
Early adopters integrating Nemotron 3 include companies in software, cybersecurity, media, manufacturing and cloud services.
Nvidia has made open source models, AI tools and datasets available on GitHub and Hugging Face for developers building agentic AI applications.
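As an illustration of how a developer might load one of the open models from Hugging Face, the Python sketch below uses the transformers library. The repository ID and generation settings are hypothetical placeholders, not official Nemotron 3 checkpoint names.

```python
# Hedged sketch of loading an open model from Hugging Face with transformers.
# The repository ID below is a hypothetical placeholder, not an official
# Nemotron 3 checkpoint name from the release.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "nvidia/example-nemotron-3-nano"  # hypothetical repo ID

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Summarize the role of a workload scheduler in an AI cluster."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```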
“Open innovation is the foundation of AI progress,” Jensen Huang, founder and CEO of Nvidia, wrote in the company’s press release.
“With Nemotron, we are transforming advanced artificial intelligence into an open platform that gives developers the transparency and efficiency they need to build agent systems at scale.”