ZeroPoint Technologies Introduces AI-MX for Enhanced Memory Optimization

ZeroPoint Technologies has unveiled a new hardware-accelerated memory optimization product called AI-MX, which promises to increase the addressable memory of foundational models by 50%, according to a press release. The solution is designed to boost the performance of enterprise and hyperscale datacenters by enabling a 1.5x increase in memory capacity, bandwidth, and tokens served per second for applications relying on large foundational models.

AI-MX is set to be delivered to initial customers and partners in the second half of 2025. It operates with low nanosecond latencies, making it significantly faster than traditional compression algorithms. The product is compatible with a range of memory types, including HBM, LPDDR, GDDR, and DDR, ensuring broad applicability across AI acceleration use cases.

ZeroPoint Technologies' CEO, Klas Moreau, highlighted the potential cost savings for companies operating large-scale datacenters, as AI-MX addresses the growing demand for memory capacity, power, and bandwidth. The company aims to further enhance the capacity and performance of AI-MX in future iterations, addressing the critical needs of today's hyperscale and enterprise data center operators.
