Clarifai's GPT-OSS-120B Model Tops Performance and Cost Efficiency Rankings
Clarifai has achieved a significant milestone: its deployment of the GPT-OSS-120B model has been ranked at the top for performance and cost efficiency by Artificial Analysis. According to the company's press release, the model delivered an output speed of 313 tokens per second and a time to first token (TTFT) of just 0.27 seconds.
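To put those figures in context, a rough end-to-end generation time can be estimated as TTFT plus the number of output tokens divided by throughput. The short Python sketch below illustrates this using the reported numbers; the function name and the token counts in the example are illustrative, not part of the benchmark.

```python
# Rough end-to-end latency estimate from the reported benchmark figures.
# Illustrative only: assumes a steady decode rate after the first token.

TTFT_SECONDS = 0.27        # reported time to first token
TOKENS_PER_SECOND = 313    # reported output speed

def estimated_latency(output_tokens: int) -> float:
    """Approximate wall-clock time to generate `output_tokens` tokens."""
    return TTFT_SECONDS + output_tokens / TOKENS_PER_SECOND

if __name__ == "__main__":
    for n in (100, 500, 1000):
        print(f"{n} tokens: ~{estimated_latency(n):.2f} s")
```

At the reported rates, a 500-token response would complete in roughly 1.9 seconds under this simple model.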
The benchmark analysis highlights Clarifai's model as a leading choice for AI workloads that require speed, flexibility, and reliability. The model's cost efficiency is underscored by a blended price of $0.16 per million tokens, making it an attractive option for customers seeking high performance without being tied to specific hardware vendors.
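As a rough illustration of what the blended rate means in practice, the sketch below converts hypothetical monthly token volumes into an estimated spend. The volumes are made-up examples, and "blended" is treated here simply as a single effective price across input and output tokens.

```python
# Rough monthly cost estimate at the reported blended price.
# The token volumes below are hypothetical, not benchmark data.

BLENDED_PRICE_PER_MILLION = 0.16  # USD per million tokens (reported)

def monthly_cost(tokens_per_month: int) -> float:
    """Estimated spend for a given monthly token volume."""
    return tokens_per_month / 1_000_000 * BLENDED_PRICE_PER_MILLION

if __name__ == "__main__":
    for volume in (10_000_000, 1_000_000_000):
        print(f"{volume:,} tokens/month: ~${monthly_cost(volume):.2f}")
```

Under these assumptions, 10 million tokens per month comes to about $1.60 and one billion tokens to about $160.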
Clarifai's platform supports a variety of deployment environments, including serverless, dedicated instances, and multi-cloud setups, providing customers with the flexibility to deploy and scale models efficiently. This achievement reinforces Clarifai's position as a leader in AI infrastructure, offering a robust solution for modern AI applications.