| FEATURE | VALUE |
|---|---|
| Context Window (Tokens) | 131,072 |
| Max Output Tokens | 128 |
| Max File Size | 25 MB |
| Token Generation Speed | - |
| Input Token Price | $0.20 per million tokens |
| Output Token Price | $0.20 per million tokens |
| Tool Use | |
| JSON Mode | |
| Image Support | |
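For a rough sense of the pricing above, the sketch below estimates the cost of a single request from the listed per-token prices; `estimate_cost` is a hypothetical helper written for this page, not part of the Groq SDK.

```python
# Prices from the table above: $0.20 per million tokens for both input and output.
INPUT_PRICE_PER_TOKEN = 0.20 / 1_000_000
OUTPUT_PRICE_PER_TOKEN = 0.20 / 1_000_000

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost of one request at the listed per-token prices."""
    return input_tokens * INPUT_PRICE_PER_TOKEN + output_tokens * OUTPUT_PRICE_PER_TOKEN

# Example: a 1,000-token prompt with a short ~10-token moderation verdict.
print(f"${estimate_cost(1_000, 10):.6f}")  # ≈ $0.000202
```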
Unlock the full potential of content moderation with Llama-Guard-4-12B, optimized for exceptional performance on Groq hardware. Get started now:
pip install groq
from groq import Groq

client = Groq()  # reads the API key from the GROQ_API_KEY environment variable

completion = client.chat.completions.create(
    model="meta-llama/Llama-Guard-4-12B",
    messages=[
        {
            "role": "user",
            "content": "Explain why fast inference is critical for reasoning models"
        }
    ]
)

# Llama Guard returns a moderation verdict for the user message,
# not a conversational answer.
print(completion.choices[0].message.content)
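Because Llama Guard is a moderation model, the response is a safety verdict rather than prose. Below is a minimal sketch for handling it, continuing the snippet above and assuming the usual Llama Guard output format: "safe", or "unsafe" followed by comma-separated category codes (such as "S1") on the next line.

```python
# Continues from the completion above. Assumes the standard Llama Guard
# output format: "safe", or "unsafe" with category codes on the next line.
verdict = completion.choices[0].message.content.strip()

if verdict.startswith("unsafe"):
    lines = verdict.splitlines()
    categories = lines[1].split(",") if len(lines) > 1 else []
    print("Flagged categories:", categories)
else:
    print("Content passed moderation")
```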