
| FEATURE | VALUE |
|---|---|
| Context Window (Tokens) | 8192 |
| Max Output Tokens | N/A |
| Max File Size | N/A |
| Token Generation Speed | 1600 tokens per second |
| Input Token Price | $0.59 per 1M tokens |
| Output Token Price | $0.99 per 1M tokens |
| Tool Use | |
| JSON Mode | |
| Image Support | |
Experience the speed of llama-3.3-70b-specdec on Groq. First, install the Groq Python SDK:

```shell
pip install groq
```

Then run a chat completion:

```python
from groq import Groq

# The client reads the GROQ_API_KEY environment variable by default.
client = Groq()

completion = client.chat.completions.create(
    model="llama-3.3-70b-specdec",
    messages=[
        {
            "role": "user",
            "content": "Explain why fast inference is critical for reasoning models",
        }
    ],
)

print(completion.choices[0].message.content)
```
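To watch the advertised throughput arrive in real time, you can stream tokens as they are generated. This is a minimal sketch using the same SDK; the `stream=True` flag and chunk shape follow Groq's OpenAI-compatible chat completions API, and the prompt is illustrative:

```python
from groq import Groq

client = Groq()

# Stream the response incrementally instead of waiting for the full reply.
stream = client.chat.completions.create(
    model="llama-3.3-70b-specdec",
    messages=[
        {"role": "user", "content": "Explain speculative decoding in two sentences"}
    ],
    stream=True,
)

for chunk in stream:
    # Each chunk carries an incremental delta; content can be None on the final chunk.
    print(chunk.choices[0].delta.content or "", end="")
print()
```

The table also lists JSON Mode. As a hedged sketch (assuming Groq's OpenAI-compatible `response_format` parameter; the system prompt and keys below are illustrative, not part of the model page), you can ask for a single parseable JSON object:

```python
import json

from groq import Groq

client = Groq()

completion = client.chat.completions.create(
    model="llama-3.3-70b-specdec",
    # JSON mode requires telling the model, in the prompt, to emit JSON.
    messages=[
        {
            "role": "system",
            "content": "Reply with a JSON object with keys 'topic' and 'summary'.",
        },
        {"role": "user", "content": "Why does fast inference matter for agents?"},
    ],
    response_format={"type": "json_object"},
)

data = json.loads(completion.choices[0].message.content)
print(data["topic"], "-", data["summary"])
```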