- XpertStation WS300 supports models with trillions of parameters without relying on cloud infrastructure
- Two 400 GbE LAN ports enable high-speed distributed multi-node AI workloads
- Unified HBM3e GPU and LPDDR5X CPU memory maximize bandwidth for AI
MSI has officially launched the XpertStation WS300, a desktop AI workstation based on Nvidia’s DGX Station architecture.
This system is designed to handle demanding large language model, generative AI and advanced data science workloads.
The platform is powered by the Nvidia GB300 Grace Blackwell Ultra Desktop Superchip and supports up to 784 GB of large coherent memory.
Unified memory architecture for high-bandwidth AI processing
The XpertStation WS300 combines HBM3e GPU memory with LPDDR5X CPU memory for high-bandwidth data sharing.
This configuration allows local processing of trillion-parameter models and supports extensive AI workflows without relying on cloud infrastructure.
The workstation includes two 400 GbE LAN ports, which enable multi-node distributed computing with up to 800 Gbps total bandwidth.
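As a rough illustration of what 800 Gbps of aggregate bandwidth means in practice, the back-of-envelope calculation below estimates the time to move a dataset between two nodes. The 500 GB dataset size is a hypothetical example, and the figure ignores protocol overhead by assuming full line rate.

```python
# Back-of-envelope: time to move a dataset between two WS300 nodes
# over the dual 400 GbE links (800 Gbps aggregate).
# Assumptions: 500 GB dataset (hypothetical), full line rate,
# no protocol overhead.

LINK_GBPS = 800      # 2 x 400 GbE, combined
DATASET_GB = 500     # hypothetical dataset size in gigabytes

# gigabytes -> gigabits (x8), then divide by link rate in Gbps
transfer_seconds = (DATASET_GB * 8) / LINK_GBPS
print(f"~{transfer_seconds:.1f} s to transfer {DATASET_GB} GB")  # ~5.0 s
```

At full utilization, even checkpoint-scale transfers complete in seconds, which is what makes multi-node fine-tuning practical over a simple two-port link.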
MSI claims the XpertStation WS300 delivers data center-class performance directly at the desktop, a setup intended to help organizations move from experimentation to production while maintaining consistent computing reliability.
The XpertStation WS300 supports the full AI lifecycle, including large-scale model training, data-intensive analysis and real-time inference.
By acting as a centralized AI compute node, the platform enables collaborative fine-tuning and on-demand deployment while letting organizations retain control over their data and intellectual property.
High-speed PCIe Gen5 and Gen6 NVMe storage accelerates dataset ingestion and AI pipelines, ensuring sustained utilization during compute-intensive operations.
Combined with the Nvidia AI Software Stack, the workstation integrates hardware and software to allow seamless workflow transitions from research to production environments.
MSI also integrated Nvidia NemoClaw, an open-source stack that runs OpenShell in a policy-driven sandbox.
This enables autonomous AI agents to work continuously and securely at the desktop, drawing on the workstation’s 20 petaFLOPS of compute.
The configuration keeps AI processes running locally at all times, enabling experimentation with advanced AI and robotics applications without transferring sensitive workloads to cloud servers.
“MSI has a strategic vision to advance AI-first computing,” said Danny Hsu, General Manager of MSI’s Enterprise Platform Solutions.
“With Nvidia, we’re defining the next era of AI infrastructure, bridging the gap between centralized performance and distributed innovation, enabling organizations to move from experimentation to production with greater speed, scale and confidence.”
The platform offers extensive capabilities for advanced AI workflows, but its $84,999.99 price raises concerns about cost-effectiveness.
Organizations that do not require maximum memory or continuous trillion-parameter model operation may find the investment difficult to justify.
The system delivers unprecedented local AI performance, enabling demanding computing at the desktop.
However, the practical value of this workstation is likely limited to enterprises with high-capacity AI workloads and specific infrastructure requirements.