Built-in (or server-side) tools are the easiest way to add agentic capabilities to your application. Unlike remote MCP where you connect to external servers, or local tool calling where you implement functions yourself, built-in tools require zero orchestration.
Just call the API, specify which tools you want to allow the model to use, and Groq's systems will handle the rest - tool execution, orchestration, and returning the final answer.
Built-in tools are not HIPAA Covered Cloud Services under Groq's Business Associate Addendum at this time. These tools are also not currently available for use with regional or sovereign endpoints.
With built-in tools, execution happens entirely on Groq's servers. The model autonomously calls built-in tools (web search, code execution) and handles the entire agentic loop internally. You get one response with everything completed.
Your App
  → Makes request to Groq API with allowed_tools parameter
    ↓
Groq API
  → Makes request to LLM with built-in tool definitions from the allowed_tools parameter
  ← Model returns tool_calls with built-in tool names (or, if no tool calls are needed, returns final response)
    ↓
Groq API
  → Parses tool call arguments server-side
  → Makes request to built-in tool with tool call arguments
  ← Built-in tool returns results
    ↓
Groq API
  → Makes another request to LLM with tool results
  ← Model returns more tool_calls (returns to step 3), or returns final response
    ↓
Your App
Groq's Compound systems are purpose-built for agentic workflows with a full suite of built-in tools:
Models:
- groq/compound - Supports multiple tools per request
- groq/compound-mini - Single tool per request, 3x lower latency

Available Tools:
| Tool | Identifier |
|---|---|
| Web Search | web_search |
| Code Execution | code_interpreter |
| Visit Website | visit_website |
| Browser Automation | browser_automation |
| Wolfram Alpha | wolfram_alpha |
How to use Compound systems:
from groq import Groq
client = Groq()
completion = client.chat.completions.create(
    messages=[
        {
            "role": "user",
            "content": "What is the current weather in Tokyo?",
        }
    ],
    # Change model to compound to use built-in tools
    model="groq/compound",
)
print(completion.choices[0].message.content)
# Print all tool calls
# print(completion.choices[0].message.executed_tools)

import Groq from "groq-sdk";
const groq = new Groq();
export async function main() {
  const completion = await groq.chat.completions.create({
    messages: [
      {
        role: "user",
        content: "What is the current weather in Tokyo?",
      },
    ],
    // Change model to compound to use built-in tools
    model: "groq/compound",
  });
  console.log(completion.choices[0]?.message?.content || "");
  // Print all tool calls
  // console.log(completion.choices[0]?.message?.executed_tools || "");
}

main();

curl https://api.groq.com/openai/v1/chat/completions -s \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $GROQ_API_KEY" \
-d '{
"model": "groq/compound",
"messages": [{
"role": "user",
"content": "What is the current weather in Tokyo?"
}]
}'The system automatically determines which tools to use based on the query and executes them server-side. You can optionally restrict which tools are available using the compound_custom.tools.enabled_tools parameter (see Configuring Tools).
Example Response:
{
  "id": "stub",
  "object": "chat.completion",
  "created": 1761750004,
  "model": "groq/compound",
  "choices": [{
    "index": 0,
    "message": {
      "role": "assistant",
      "content": "**Current weather in Tokyo (as of the latest report on Oct 29 2025, 11:00 pm JST)**\\n\\n| Parameter | Value |\\n|-----------|-------|\\n| **Temperature** | 53 °F ≈ 12 °C |...",
      "executed_tools": [{
        "index": 0,
        "type": "search",
        "arguments": "{\"query\": \"current weather in Tokyo\"}",
        "output": "Title: Weather for Tokyo, Japan...\\n..."
      }]
    },
    "finish_reason": "stop"
  }],
  "usage": {
    "prompt_tokens": 116,
    "completion_tokens": 571,
    "total_tokens": 7340
  }
}

The executed_tools array shows which tools were called during the request, including the arguments passed and the results returned.
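If you want to inspect the individual calls rather than just the final text, you can iterate over that array. A minimal sketch in Python, assuming each entry exposes the same type, arguments, and output fields shown in the example response above (exact attribute access may vary by SDK version):

from groq import Groq

client = Groq()

completion = client.chat.completions.create(
    model="groq/compound",
    messages=[{"role": "user", "content": "What is the current weather in Tokyo?"}],
)

# Walk the built-in tool calls Groq executed server-side for this response
for tool in completion.choices[0].message.executed_tools or []:
    print(tool.type)       # e.g. "search"
    print(tool.arguments)  # JSON string of the arguments the model passed
    print(tool.output)     # raw result the built-in tool returned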
Use the compound_custom.tools.enabled_tools parameter to restrict which tools are available. Pass an array of tool identifiers: web_search, code_interpreter, visit_website, browser_automation, wolfram_alpha.
from groq import Groq
client = Groq(
    default_headers={
        "Groq-Model-Version": "latest"
    }
)

response = client.chat.completions.create(
    model="groq/compound",
    messages=[
        {
            "role": "user",
            "content": "Search for recent AI developments and then visit the Groq website"
        }
    ],
    compound_custom={
        "tools": {
            "enabled_tools": ["web_search", "visit_website"]
        }
    }
)

import Groq from "groq-sdk";
const groq = new Groq({
  defaultHeaders: {
    "Groq-Model-Version": "latest"
  }
});

const response = await groq.chat.completions.create({
  model: "groq/compound",
  messages: [
    {
      role: "user",
      content: "Search for recent AI developments and then visit the Groq website"
    }
  ],
  compound_custom: {
    tools: {
      enabled_tools: ["web_search", "visit_website"]
    }
  }
});

curl "https://api.groq.com/openai/v1/chat/completions" \
  -X POST \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer ${GROQ_API_KEY}" \
  -H "Groq-Model-Version: latest" \
  -d '{
    "messages": [
      {
        "role": "user",
        "content": "Search for recent AI developments and then visit the Groq website"
      }
    ],
    "model": "groq/compound",
    "compound_custom": {
      "tools": {
        "enabled_tools": ["web_search", "visit_website"]
      }
    }
  }'

Groq's Compound systems only support built-in tools and cannot be used with local tool calling or remote MCP tools.
For more details, see the Compound Built-In Tools documentation.
OpenAI's open-weight models support a subset of built-in tools:
Models:
- openai/gpt-oss-120b
- openai/gpt-oss-20b

Available Tools:
| Tool | Identifier |
|---|---|
| Browser Search | browser_search |
| Code Execution | code_interpreter |
Limitations:
How to use GPT-OSS models:
from groq import Groq
client = Groq()
# Automatically uses tools when needed
response = client.chat.completions.create(
    model="openai/gpt-oss-120b",
    messages=[{
        "role": "user",
        "content": "What's the current population of Tokyo?"
    }]
)

# Or specify which tool to enable
response = client.chat.completions.create(
    model="openai/gpt-oss-120b",
    messages=[{
        "role": "user",
        "content": "Search for recent AI developments"
    }],
    tools=[{"type": "browser_search"}]
)

print(response.choices[0].message.content)

import Groq from "groq-sdk";
const client = new Groq();
// Automatically uses tools when needed
const response = await client.chat.completions.create({
  model: "openai/gpt-oss-120b",
  messages: [{
    role: "user",
    content: "What's the current population of Tokyo?"
  }]
});

// Or specify which tool to enable
const responseWithTool = await client.chat.completions.create({
  model: "openai/gpt-oss-120b",
  messages: [{
    role: "user",
    content: "Search for recent AI developments"
  }],
  tools: [{ type: "browser_search" }]
});

console.log(response.choices[0].message.content);

GPT-OSS models are ideal when you need a large context window (131K tokens) with basic tool capabilities.
Use the tools parameter with tool type objects. You can specify browser_search or code_interpreter.
# Single tool
tools=[{"type": "browser_search"}]
# Or multiple tools
tools=[{"type": "browser_search"}, {"type": "code_interpreter"}]// Single tool
const tools = [{ type: "browser_search" }];
// Or multiple tools
const tools = [{ type: "browser_search" }, { type: "code_interpreter" }];GPT-OSS models can be used alongside local tool calling or remote MCP tools in the same request.
To see which tools were used in a request, check the executed_tools field in the response:
import os
from groq import Groq
client = Groq(api_key=os.environ.get("GROQ_API_KEY"))
response = client.chat.completions.create(
    model="groq/compound",
    messages=[
        {"role": "user", "content": "What did Groq release last week?"}
    ]
)

# Log the tools that were used to generate the response
print(response.choices[0].message.executed_tools)

import Groq from 'groq-sdk';
const groq = new Groq();
async function main() {
  const response = await groq.chat.completions.create({
    model: 'groq/compound',
    messages: [
      {
        role: 'user',
        content: 'What did Groq release last week?'
      }
    ]
  })
  // Log the tools that were used to generate the response
  console.log(response.choices[0].message.executed_tools)
}

main();

curl https://api.groq.com/openai/v1/chat/completions -s \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $GROQ_API_KEY" \
-d '{
"model": "groq/compound",
"messages": [{
"role": "user",
"content": "What did Groq release last week?"
}]
}'