FEATURE | VALUE |
---|---|
Context Window (Tokens) | 128k |
Max Output Tokens | 8k |
Max File Size | - |
Token Generation Speed | ~3100 tps |
Input Token Price | $0.04/1M tokens |
Output Token Price | $0.04/1M tokens |
Tool Use | |
JSON Mode | |
Image Support | |
Experience lightning-fast text analysis and content generation with `llama-3.2-1b-preview` at Groq speed. Try it now:
```shell
pip install groq
```

```python
from groq import Groq

client = Groq()  # reads the GROQ_API_KEY environment variable

completion = client.chat.completions.create(
    model="llama-3.2-1b-preview",
    messages=[
        {
            "role": "user",
            "content": "Explain why fast inference is critical for reasoning models"
        }
    ]
)
print(completion.choices[0].message.content)
```
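The JSON Mode row in the table above refers to the `response_format` parameter of Groq's OpenAI-compatible chat completions API. A minimal sketch of how a JSON-mode request could be assembled (the helper function, system prompt, and key names here are illustrative, not part of the Groq SDK):

```python
def json_mode_request(prompt: str) -> dict:
    # Hypothetical helper: builds kwargs for client.chat.completions.create
    return {
        "model": "llama-3.2-1b-preview",
        # JSON mode: asks the API to constrain output to a valid JSON object
        "response_format": {"type": "json_object"},
        "messages": [
            # JSON mode expects the prompt to mention JSON explicitly
            {"role": "system", "content": "Respond only with a JSON object."},
            {"role": "user", "content": prompt},
        ],
    }

# With a configured client (GROQ_API_KEY set), you would then call:
# completion = client.chat.completions.create(**json_mode_request("Summarize fast inference."))
# data = json.loads(completion.choices[0].message.content)
```

Parsing the response with `json.loads` is then safe, since the API returns well-formed JSON in this mode.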