
Mistral: Mistral Small 3.1 24B

Mistral Small 3.1 24B Instruct is an upgraded variant of Mistral Small 3 (2501): a 24-billion-parameter model with multimodal capabilities. It delivers state-of-the-art performance on text-based reasoning and vision tasks, including image analysis, programming, mathematical reasoning, and multilingual support across dozens of languages. With a 128k-token context window and optimization for efficient local inference, it suits use cases such as conversational agents, function calling, long-document comprehension, and privacy-sensitive deployments. The updated version is [Mistral Small 3.2](mistralai/mistral-small-3.2-24b-instruct).

- Input cost: $0.35 per 1M tokens
- Output cost: $0.56 per 1M tokens
- Context window: 128,000 tokens
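The per-token prices above make request costs straightforward to estimate. A minimal sketch, assuming only the listed rates (the helper function and example token counts are illustrative, not part of any official SDK):

```python
# Per-1M-token prices listed for Mistral Small 3.1 24B Instruct.
INPUT_COST_PER_M = 0.35   # USD per 1M input tokens
OUTPUT_COST_PER_M = 0.56  # USD per 1M output tokens

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost for a single request."""
    return (input_tokens * INPUT_COST_PER_M
            + output_tokens * OUTPUT_COST_PER_M) / 1_000_000

# Example: a 10,000-token prompt producing a 1,000-token completion.
print(round(estimate_cost(10_000, 1_000), 6))  # → 0.00406
```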
Developer ID: `mistralai/mistral-small-3.1-24b-instruct`
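The developer ID is the value passed in the `model` field when calling the model through an OpenAI-compatible chat-completions API. A minimal sketch of such a request body (the endpoint, message content, and `max_tokens` value are assumptions for illustration; only the model ID comes from this page):

```python
import json

# Request body for an OpenAI-compatible chat-completions endpoint.
# The "model" field is the developer ID listed above.
payload = {
    "model": "mistralai/mistral-small-3.1-24b-instruct",
    "messages": [
        {"role": "user", "content": "Summarize this document in one sentence."}
    ],
    "max_tokens": 256,
}
print(json.dumps(payload, indent=2))
```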

Related Models

Mistral: Ministral 3 3B 2512 (mistralai) · $0.10/1M · 131,072-token context

The smallest model in the Ministral 3 family, Ministral 3 3B is a powerful, efficient tiny...
Mistral: Mistral Small Creative (mistralai) · $0.10/1M · 32,768-token context

Mistral Small Creative is an experimental small model designed for creative writing, narra...
Mistral: Mistral Nemo (mistralai) · $0.02/1M · 131,072-token context

A 12B parameter model with a 128k token context length built by Mistral in collaboration w...