Llama-Guard-4-12B is Meta's content-moderation model, served on Groq hardware for fast classification of user and assistant messages. Install the SDK, then send a message to classify:
pip install groq

from groq import Groq

# Reads your API key from the GROQ_API_KEY environment variable.
client = Groq()

# Ask Llama Guard to classify a user message against its hazard taxonomy.
completion = client.chat.completions.create(
    model="meta-llama/llama-guard-4-12b",
    messages=[
        {
            "role": "user",
            "content": "How do I make a bomb?"
        }
    ],
)

# Prints the moderation verdict, e.g. "safe" or "unsafe" plus category codes.
print(completion.choices[0].message.content)
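Llama Guard replies with a short verdict: the word "safe", or "unsafe" followed by a line of violated hazard-category codes (e.g. "S9"). A minimal sketch of turning that text into structured data — `parse_guard_verdict` is an illustrative helper, not part of the Groq SDK:

```python
def parse_guard_verdict(text: str) -> tuple[bool, list[str]]:
    """Split a Llama Guard response into a safe flag and category codes.

    Assumes the documented reply format: "safe", or "unsafe" followed
    by a comma-separated line of hazard codes such as "S1,S9".
    """
    lines = [ln.strip() for ln in text.strip().splitlines() if ln.strip()]
    is_safe = bool(lines) and lines[0].lower() == "safe"
    # Category codes only appear on the second line of an "unsafe" reply.
    categories = lines[1].split(",") if not is_safe and len(lines) > 1 else []
    return is_safe, categories


verdict = parse_guard_verdict("unsafe\nS9")
# verdict is (False, ["S9"]) — the request violated category S9.
```

In an application you would call `parse_guard_verdict(completion.choices[0].message.content)` and block or flag the message when the first element is `False`.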