CoreWeave Launches NVIDIA GB200 Grace Blackwell Systems

CoreWeave announced in a press release that it has launched NVIDIA GB200 Grace Blackwell systems at scale, with IBM, Mistral AI, and Cohere among its initial customers. The systems, offered through CoreWeave's cloud services, are designed to advance AI model development and deployment.

The NVIDIA GB200 NVL72 rack-scale systems pair NVIDIA Grace Blackwell Superchips with advanced networking and are built specifically for reasoning and agentic AI workloads. CoreWeave's platform layers performance- and reliability-focused services on top of this hardware, including CoreWeave Kubernetes Service and Slurm on Kubernetes.

CoreWeave's rapid deployment of these systems underscores its commitment to providing cutting-edge AI infrastructure. The company also recently set a new industry record in AI inference with these superchips, as reported in the latest MLPerf v5.0 results.
