Myrtle.ai's VOLLO Sets Record in Financial ML Inference Benchmark
Myrtle.ai announced in a press release that its VOLLO inference accelerator achieved record performance in the STAC-ML Markets (Inference) benchmark for financial applications. The audited results, presented at the STAC Summit in London, showed VOLLO reaching latencies as low as 2 microseconds at the 99th percentile, halving its previous record.
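To make the 99th-percentile figure concrete: a p99 latency means 99% of individual inferences completed at or below that time. A minimal sketch of how such a figure is derived from repeated timings is shown below (illustrative only; the simulated sample distribution is an assumption, not Myrtle.ai's or STAC's methodology).

```python
# Illustrative sketch: deriving a p99 latency from per-inference timings.
# The gamma-distributed samples below are synthetic stand-ins, not real data.
import numpy as np

def p99_latency_us(latencies_us):
    """Return the 99th-percentile latency from per-inference timings (microseconds)."""
    return float(np.percentile(latencies_us, 99))

rng = np.random.default_rng(0)
# 10,000 simulated timings clustered around ~1.8 us
samples = rng.gamma(shape=9.0, scale=0.2, size=10_000)
print(f"p99 latency: {p99_latency_us(samples):.2f} us")
```

Because p99 reflects tail behavior rather than the average, it is a stricter measure for trading systems, where a single slow response can mean a missed opportunity.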
The benchmark evaluates latency, throughput, and efficiency for systems processing market data in real time. VOLLO outperformed all previously audited systems across three benchmark models, demonstrating consistent low latency suitable for trading, risk analysis, and quoting.
The tested configuration used a Silicom FBAP4@VP18-2L0S PCIe accelerator card with an AMD Versal Premium VP1802 Adaptive SoC, installed in a Supermicro AS-2015CS-TNR server. According to Myrtle.ai, this hardware combination supports deterministic inference performance for complex models.
VOLLO has been deployed in production trading environments and allows machine learning developers to compile models using standard tools and run them on FPGA-based systems. Full benchmark details are available in the STAC Report (SUT ID MRTL260323).