Nvidia to License Groq’s AI Chip Technology in $20 Billion Agreement
Nvidia has entered a non-exclusive licensing agreement with AI chip startup Groq, a deal valued at approximately $20 billion. The agreement covers Groq’s inference technology and includes Nvidia hiring several key executives, among them founder and CEO Jonathan Ross and president Sunny Madra. Groq’s chief financial officer, Simon Edwards, will assume the role of Groq’s CEO following the transaction.
The arrangement allows Nvidia to integrate Groq’s low-latency processors into its AI factory architecture to support a wider range of real-time and inference workloads. Despite the scale of the deal, Groq will continue operating as an independent company, and its GroqCloud service will remain unaffected.
Groq’s flagship product, the LPU (Language Processing Unit), is designed for AI inference tasks and is claimed to deliver up to ten times greater energy efficiency than conventional graphics processors. The chip’s architecture includes on-chip SRAM and a deterministic processing design that reduces computation latency. Groq also connects its LPUs using a proprietary interconnect technology called RealScale, developed to improve synchronization across AI servers.
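To make the idea of a "deterministic processing design" more concrete, the sketch below is a minimal, purely illustrative Python example of static scheduling, where every operation's latency is known ahead of time so the entire execution schedule can be fixed before the chip runs. It is not based on Groq's actual compiler or hardware; the operation names, cycle counts, and the `static_schedule` helper are all hypothetical.

```python
# Toy illustration (not Groq's toolchain): if every operation's latency is
# deterministic and known at compile time, a complete schedule can be
# computed once, with no runtime variance from caches or memory arbitration.
from dataclasses import dataclass

@dataclass
class Op:
    name: str
    cycles: int          # fixed, known-ahead-of-time latency
    deps: tuple = ()     # names of ops that must finish first

def static_schedule(ops):
    """Assign each op a fixed start cycle using only compile-time information."""
    finish, schedule = {}, {}
    for op in ops:  # ops assumed to be listed in dependency order
        start = max((finish[d] for d in op.deps), default=0)
        schedule[op.name] = start
        finish[op.name] = start + op.cycles
    return schedule, max(finish.values())

ops = [
    Op("load_weights", 4),
    Op("matmul", 10, deps=("load_weights",)),
    Op("activation", 2, deps=("matmul",)),
]
schedule, total = static_schedule(ops)
print(schedule, "total cycles:", total)
# Because the latencies are fixed, 'total' is exact and repeatable run to run.
```

The takeaway from the toy example is that with fixed latencies the total cycle count is exact and repeatable, which is the property vendors of deterministic inference hardware contrast with cache- and queue-driven designs.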
The reported $20 billion price represents a significant premium over Groq’s $6.9 billion valuation from its September funding round. The transaction marks Nvidia’s largest technology licensing deal to date and adds to a series of similar agreements aimed at expanding its AI hardware capabilities.