Mistral AI Unveils Devstral Medium and Upgraded Devstral Small Models

Mistral AI has introduced Devstral Medium and an upgraded Devstral Small 1.1, enhancing agentic coding capabilities with improved performance and versatility.

Mistral AI has introduced two new releases: Devstral Medium and an upgraded version of Devstral Small, known as Devstral Small 1.1. Both models come out of a collaboration with All Hands AI focused on enhancing agentic coding capabilities.

Devstral Small 1.1, released under the Apache 2.0 license, keeps its predecessor's 24-billion-parameter architecture but delivers significant improvements. It scores 53.6% on the SWE-Bench Verified benchmark, setting a new standard for open models without test-time scaling. The model is versatile, supporting both Mistral's function-calling format and XML output, and generalizes well across different prompts and coding environments.
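
As a rough sketch of what the function-calling support looks like in practice, the snippet below sends a single tool definition to the model through Mistral's Python SDK (`mistralai`). The model identifier `devstral-small-2507` and the `run_unit_tests` tool are illustrative assumptions, not details confirmed by the announcement.

```python
import os

from mistralai import Mistral

client = Mistral(api_key=os.environ["MISTRAL_API_KEY"])

# A single illustrative tool; a real agent harness would expose many more.
tools = [
    {
        "type": "function",
        "function": {
            "name": "run_unit_tests",  # hypothetical tool name, for illustration only
            "description": "Run the project's unit tests and return any failures.",
            "parameters": {
                "type": "object",
                "properties": {
                    "path": {"type": "string", "description": "Directory containing the tests."}
                },
                "required": ["path"],
            },
        },
    }
]

response = client.chat.complete(
    model="devstral-small-2507",  # assumed API name for Devstral Small 1.1
    messages=[
        {
            "role": "user",
            "content": "The tests under ./tests are failing. Investigate and propose a fix.",
        }
    ],
    tools=tools,
    tool_choice="auto",
)

message = response.choices[0].message
if message.tool_calls:
    # The model chose to call a tool rather than answer directly.
    for call in message.tool_calls:
        print(call.function.name, call.function.arguments)
else:
    print(message.content)
```

In an agent loop, the tool call's result would be appended to the conversation and the model queried again until it produces a final answer.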

Devstral Medium, available through Mistral AI's API, builds on the strengths of Devstral Small and achieves a score of 61.6% on the SWE-Bench Verified benchmark. It offers high performance at a competitive price, making it suitable for businesses and developers seeking cost-effective solutions. The model can be deployed on private infrastructure for enhanced data privacy and supports custom finetuning for specific use cases.

Both models are accessible via API, with Devstral Small 1.1 priced at $0.10 per million input tokens and $0.30 per million output tokens, and Devstral Medium at $0.40 per million input tokens and $2.00 per million output tokens. These offerings underscore Mistral AI's commitment to providing high-performance models, including open-weight options, for the software development community.
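
To make the pricing concrete, the back-of-the-envelope sketch below estimates the cost of a single agentic coding request for each model from the published per-token rates; the model keys and the example token counts are illustrative assumptions.

```python
# Published prices in USD per million tokens.
PRICES = {
    "devstral-small-1.1": {"input": 0.10, "output": 0.30},
    "devstral-medium": {"input": 0.40, "output": 2.00},
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Estimate the USD cost of one request from its token counts."""
    rate = PRICES[model]
    return (input_tokens * rate["input"] + output_tokens * rate["output"]) / 1_000_000

# Example: 60k tokens of repository context in, a 4k-token patch out.
print(f"Devstral Small 1.1: ${estimate_cost('devstral-small-1.1', 60_000, 4_000):.4f}")  # $0.0072
print(f"Devstral Medium:    ${estimate_cost('devstral-medium', 60_000, 4_000):.4f}")     # $0.0320
```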
