Ai2's Olmo 2 1B Model Outperforms Rivals from Google and Meta

Ai2 has released Olmo 2 1B, a small AI model that surpasses similarly sized models from Google, Meta, and Alibaba on key benchmarks.

The Allen Institute for AI (Ai2) has announced on its website the release of Olmo 2 1B, a 1-billion-parameter AI model that outperforms similarly sized models from Google, Meta, and Alibaba on several benchmarks. Olmo 2 1B is available under the Apache 2.0 license on Hugging Face, and Ai2 has also published the code and datasets used for its development.

Olmo 2 1B is designed to be accessible, running efficiently on consumer-grade hardware such as modern laptops and mobile devices. It was trained on a dataset of 4 trillion tokens from various sources, and on tests of arithmetic reasoning and factual accuracy it outperforms Google's Gemma 3 1B, Meta's Llama 3.2 1B, and Alibaba's Qwen 2.5 1.5B.
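
For readers who want to try the model locally, a minimal sketch using the Hugging Face Transformers library is shown below. The repository id used here is an assumption based on Ai2's naming conventions and should be checked against the model's Hugging Face page.

```python
# Minimal sketch: load Olmo 2 1B with Hugging Face Transformers and run a prompt.
# The repository id "allenai/OLMo-2-0425-1B" is an assumption; verify the exact
# id on Hugging Face before use.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "allenai/OLMo-2-0425-1B"  # assumed repository id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "What is 17 multiplied by 6?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```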

Despite its capabilities, Ai2 cautions that Olmo 2 1B can produce problematic outputs, including harmful or sensitive content, and recommends against its use in commercial settings.
