Mistral Saba 24B

Mistral Saba 24B is a specialized model trained to excel in Arabic, Farsi, Urdu, Hebrew, and Indic languages. Designed for high-performance multilingual workloads, it delivers exceptional results across a wide range of tasks in these languages while maintaining strong performance in English. With a 32K-token context window and tool use capabilities, it is ideal for complex multilingual applications requiring deep language understanding and regional context.

Key Technical Specifications

Model Architecture

Built on a 24B dense transformer architecture, Mistral Saba is specifically optimized for Arabic and South Asian languages while maintaining strong general capabilities. The model features advanced multilingual training techniques to ensure high performance across its target languages.

Performance Metrics

The model demonstrates exceptional performance across multilingual benchmarks:
  • MBZUAI Arabic MMLU (0-shot): 77.39
  • Arabic MT-Bench-dev (internal translation & gpt-4o-2024-08-06 judge): 7.93
  • English MT-Bench-dev (gpt-4o-2024-05-13 judge): 7.98

Technical Details

  • Context Window (Tokens): 32K
  • Max Output Tokens: -
  • Max File Size: -
  • Token Generation Speed: ~330 tps
  • Input Token Price: $0.79/1M tokens
  • Output Token Price: $0.79/1M tokens
  • Tool Use: Supported
  • JSON Mode: Supported
  • Image Support: Not Supported
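
Tool use is exposed through Groq's OpenAI-compatible chat completions API. The sketch below shows a minimal tool-use round trip; the get_exchange_rate function and its schema are hypothetical, and a real application would execute the tool and return its result in a follow-up message:

from groq import Groq
import json

client = Groq()  # reads GROQ_API_KEY from the environment

# Hypothetical tool definition -- the name and schema are illustrative only.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_exchange_rate",
            "description": "Look up the exchange rate between two currencies",
            "parameters": {
                "type": "object",
                "properties": {
                    "base": {"type": "string", "description": "Base currency code, e.g. USD"},
                    "quote": {"type": "string", "description": "Quote currency code, e.g. AED"},
                },
                "required": ["base", "quote"],
            },
        },
    }
]

completion = client.chat.completions.create(
    model="mistral-saba-24b",
    messages=[
        # "What is the exchange rate of the dollar against the dirham?"
        {"role": "user", "content": "ما هو سعر صرف الدولار مقابل الدرهم؟"}
    ],
    tools=tools,
)

message = completion.choices[0].message
if message.tool_calls:
    # The model requested a tool call; print the parsed name and arguments.
    call = message.tool_calls[0]
    print(call.function.name, json.loads(call.function.arguments))
else:
    # The model answered directly without calling a tool.
    print(message.content)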

Use Cases

Translation & Language Support
Translates between Arabic, Farsi, Urdu, Hebrew, and Indic languages while preserving cultural context and nuance. Valuable for international businesses, educational institutions, and government agencies requiring accurate cross-cultural communication; a minimal request sketch follows these use cases.
Content Creation & Adaptation
Creates and adapts content across multiple languages while maintaining message integrity. Helps organizations develop culturally relevant materials for Arabic and South Asian markets, benefiting content creators, marketers, and educators.
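
As a minimal sketch of the translation use case, the request below asks the model to translate a short Urdu sentence into English. The system prompt wording is illustrative, not a prescribed format:

from groq import Groq

client = Groq()

completion = client.chat.completions.create(
    model="mistral-saba-24b",
    messages=[
        {
            "role": "system",
            # Illustrative instruction; swap in the source/target languages you need.
            "content": "You are a professional translator. Translate the user's "
                       "text from Urdu to English, preserving tone and nuance.",
        },
        # "Life is a beautiful journey"
        {"role": "user", "content": "زندگی ایک خوبصورت سفر ہے"},
    ],
)
print(completion.choices[0].message.content)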

Best Practices

  • Language Context: Provide prompts in the target language for optimal performance and cultural relevance
  • Context Window: Leverage the 32K token capacity for comprehensive documents and extended conversations
  • Few-shot prompting: Include examples in your prompts when working with complex linguistic or cultural tasks; see the sketch after this list
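
A minimal few-shot sketch: prior user/assistant turns serve as worked examples before the real query, and the prompt stays in the target language per the first best practice. The Arabic sentiment task and its labels below are illustrative:

from groq import Groq

client = Groq()

# Worked examples supplied as prior conversation turns.
few_shot = [
    {"role": "user", "content": "صنّف الشعور: هذا المنتج رائع جدًا"},      # "this product is great"
    {"role": "assistant", "content": "إيجابي"},                             # "positive"
    {"role": "user", "content": "صنّف الشعور: خدمة العملاء كانت سيئة"},     # "customer service was bad"
    {"role": "assistant", "content": "سلبي"},                               # "negative"
]

completion = client.chat.completions.create(
    model="mistral-saba-24b",
    messages=few_shot + [
        # "delivery was fast and the price was fair" -- expected label: إيجابي
        {"role": "user", "content": "صنّف الشعور: التوصيل كان سريعًا والسعر مناسب"}
    ],
)
print(completion.choices[0].message.content)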

Get Started with Mistral Saba 24B

Experience the exceptional multilingual capabilities of mistral-saba-24b with Groq speed:

pip install groq
from groq import Groq

client = Groq()  # reads GROQ_API_KEY from the environment

# Create a chat completion with Mistral Saba 24B on Groq.
completion = client.chat.completions.create(
    model="mistral-saba-24b",
    messages=[
        {
            "role": "user",
            "content": "Explain why fast inference is critical for reasoning models"
        }
    ]
)
print(completion.choices[0].message.content)
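
Given the ~330 tps generation speed, streaming pairs well with this model for responsive interfaces. A minimal sketch using the SDK's stream=True flag:

from groq import Groq

client = Groq()

# stream=True yields incremental chunks; printing each delta as it arrives
# keeps perceived latency low at this generation speed.
stream = client.chat.completions.create(
    model="mistral-saba-24b",
    stream=True,
    # "Write a short paragraph about the importance of translation"
    messages=[{"role": "user", "content": "اكتب فقرة قصيرة عن أهمية الترجمة"}],
)
for chunk in stream:
    print(chunk.choices[0].delta.content or "", end="")
print()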