CHAI Expands GPU Cluster to 1.4 ExaFLOPS

CHAI has announced in a press release that it has expanded its GPU cluster to 1.4 exaFLOPS, enhancing its AI capabilities.

The expansion, announced in a press release, is part of CHAI's strategic investment in high-performance computing to support its next-generation AI models.

The company has allocated $20 million in 2025 to further enhance its compute capacity, positioning itself competitively among leading AI labs and startups. CHAI's in-house kCluster now utilizes thousands of cutting-edge GPUs, offering nearly 10 times the power of Stanford's Sherlock HPC Cluster.
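As a rough sanity check on the figures above (a sketch only: the 1.4 exaFLOPS and "nearly 10 times" numbers come from the article, not from an independent benchmark), the implied capacity of the Sherlock cluster works out as follows:

```python
# Back-of-the-envelope check of the quoted figures.
# Assumption (from the article): kCluster ~ 1.4 exaFLOPS, roughly 10x Sherlock.
KCLUSTER_EXAFLOPS = 1.4

# 1 exaFLOPS = 1,000 petaFLOPS
kcluster_petaflops = KCLUSTER_EXAFLOPS * 1_000

# If kCluster is roughly 10x Sherlock, Sherlock's implied capacity is:
implied_sherlock_petaflops = kcluster_petaflops / 10

print(f"kCluster: {kcluster_petaflops:.0f} PFLOPS")
print(f"Implied Sherlock capacity: ~{implied_sherlock_petaflops:.0f} PFLOPS")
```

That puts the implied Sherlock figure on the order of 140 petaFLOPS, a ballpark derived purely from the article's own numbers.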

This infrastructure supports over 51,000 large language models (LLMs) globally, enabling richer interactions and deeper personalization for users. CHAI, known for its engaging social AI platform, continues to innovate in AI research and product development.

