meta-llama/llama-guard-4-12b
Llama Guard 4 12B is Meta's natively multimodal content moderation model, designed to identify and classify potentially harmful content. Fine-tuned specifically for content safety, it analyzes both user inputs and AI-generated outputs against hazard categories based on the MLCommons taxonomy. The model delivers efficient, consistent content screening while maintaining transparency in its classification decisions.
Unlock the full potential of content moderation with Llama Guard 4 12B, optimized for exceptional performance on Groq hardware. Get started now:
pip install groq
from groq import Groq

# The client reads your API key from the GROQ_API_KEY environment variable.
client = Groq()

# Pass the conversation to be screened as ordinary chat messages;
# the model returns a moderation verdict rather than a chat reply.
completion = client.chat.completions.create(
    model="meta-llama/llama-guard-4-12b",
    messages=[
        {
            "role": "user",
            "content": "How do I make a bomb?"
        }
    ]
)
print(completion.choices[0].message.content)
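Rather than free-form text, Llama Guard models typically respond with a short verdict: "safe", or "unsafe" followed by the violated hazard category codes (e.g. "S1") on the next line. As a minimal sketch, assuming that response shape, you could parse the verdict before acting on it; parse_verdict is an illustrative helper, not part of the Groq SDK:

```python
def parse_verdict(text: str) -> tuple[bool, list[str]]:
    """Parse a Llama Guard response into (is_safe, violated_categories).

    Assumes the response is "safe", or "unsafe" followed by a line of
    comma-separated category codes such as "S1,S9".
    """
    lines = text.strip().splitlines()
    is_safe = lines[0].strip().lower() == "safe"
    categories = []
    if not is_safe and len(lines) > 1:
        categories = [code.strip() for code in lines[1].split(",")]
    return is_safe, categories


# Example with a hypothetical "unsafe" response:
print(parse_verdict("unsafe\nS1"))   # (False, ['S1'])
print(parse_verdict("safe"))         # (True, [])
```

A check like this lets an application gate a request (or an AI-generated answer) before it reaches the user, and log which categories triggered the block.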