
CoreWeave, NVIDIA, and IBM Achieve Record MLPerf Results with GB200 Superchips
CoreWeave, in collaboration with NVIDIA and IBM, has submitted the largest-ever MLPerf Training v5.0 results using NVIDIA GB200 Grace Blackwell Superchips, the company announced in a press release. The submission used 2,496 NVIDIA Blackwell GPUs on CoreWeave's AI-optimized cloud platform, making it the largest NVIDIA GB200 NVL72 cluster ever benchmarked under MLPerf.
The submission achieved a breakthrough result on the Llama 3.1 405B model, completing the training run in just 27.3 minutes — more than twice as fast as other submissions at similar cluster sizes — highlighting the performance leap enabled by the GB200 NVL72 architecture.
CoreWeave's infrastructure demonstrated its ability to deliver consistent, high-performance AI workloads at scale. For customers, the results translate to faster model development cycles and lower costs, allowing them to train and deploy AI models more efficiently.