
CoreWeave Launches NVIDIA GB200 Grace Blackwell Systems
CoreWeave has launched NVIDIA GB200 Grace Blackwell systems at scale, with IBM, Mistral AI, and Cohere among its initial customers, the company announced in a press release. These systems, part of CoreWeave's cloud services, are designed to advance AI model development and deployment.
The NVIDIA GB200 NVL72 rack-scale systems pair NVIDIA Grace Blackwell Superchips with advanced networking and are built specifically for reasoning and agentic AI applications. Combined with CoreWeave's cloud platform, which is optimized for performance and reliability, they are delivered alongside services such as CoreWeave Kubernetes Service and Slurm on Kubernetes.
CoreWeave's rapid deployment of these systems underscores its commitment to providing cutting-edge AI infrastructure. The company also recently set a new industry record for AI inference with these superchips, as reported in the latest MLPerf v5.0 results.