deepseek-r1-distill-llama-70b
DeepSeek-R1-Distill-Llama-70B is a distilled version of DeepSeek's R1 model, fine-tuned from the Llama-3.3-70B-Instruct base model. This model leverages knowledge distillation to retain robust reasoning capabilities and deliver exceptional performance on mathematical and logical reasoning tasks with Groq's industry-leading speed.
Experience the reasoning capabilities of deepseek-r1-distill-llama-70b with Groq speed now:
```shell
pip install groq
```
```python
from groq import Groq

# Reads the API key from the GROQ_API_KEY environment variable
client = Groq()

completion = client.chat.completions.create(
    model="deepseek-r1-distill-llama-70b",
    messages=[
        {
            "role": "user",
            "content": "Explain why fast inference is critical for reasoning models"
        }
    ]
)

print(completion.choices[0].message.content)
```
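DeepSeek-R1 distill models typically emit their chain-of-thought wrapped in `<think>…</think>` tags before the final answer. A minimal sketch of separating the reasoning trace from the answer, assuming that tag convention (the sample text below is illustrative, not actual model output):

```python
def split_reasoning(text: str) -> tuple[str, str]:
    """Split an R1-style completion into (reasoning, answer).

    Assumes the chain-of-thought is wrapped in <think>...</think>;
    if no tags are present, the whole text is treated as the answer.
    """
    start, end = text.find("<think>"), text.find("</think>")
    if start == -1 or end == -1:
        return "", text.strip()
    reasoning = text[start + len("<think>"):end].strip()
    answer = text[end + len("</think>"):].strip()
    return reasoning, answer

# Illustrative sample, not real model output:
sample = ("<think>Low latency keeps multi-step chains interactive.</think>"
          "Fast inference matters because reasoning models generate long token chains.")
reasoning, answer = split_reasoning(sample)
print(reasoning)  # the hidden chain-of-thought
print(answer)     # the user-facing answer
```

Stripping the `<think>` block this way is useful when you want to log the reasoning separately or show users only the final answer.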