Tavily is a comprehensive web search, scraping, and crawling API designed specifically for AI agents. It provides real-time web access, content extraction, and advanced search capabilities. Combined with Groq's ultra-fast inference through MCP, you can build intelligent agents that research topics, monitor websites, and extract structured data in seconds.
Key Features:

- Real-time web search with advanced filters (time range, depth, topic, max results)
- Full content extraction from specific URLs
- Single-page and batch scraping with clean output
- Website crawling with depth and pattern controls
Install the dependencies and set your API keys:

```bash
pip install openai python-dotenv
```

```bash
export GROQ_API_KEY="your-groq-api-key"
export TAVILY_API_KEY="your-tavily-api-key"
```

Create the client, register the Tavily MCP server as a tool, and run a basic search:

```python
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://api.groq.com/api/openai/v1",
    api_key=os.getenv("GROQ_API_KEY")
)

tools = [{
    "type": "mcp",
    "server_url": f"https://mcp.tavily.com/mcp/?tavilyApiKey={os.getenv('TAVILY_API_KEY')}",
    "server_label": "tavily",
    "require_approval": "never",
}]

response = client.responses.create(
    model="openai/gpt-oss-120b",
    input="What are recent AI startup funding announcements?",
    tools=tools,
    temperature=0.1,
    top_p=0.4,
)

print(response.output_text)
```

Search within specific time ranges:
```python
response = client.responses.create(
    model="openai/gpt-oss-120b",
    input="""Find AI model releases from the past month.
    Use tavily_search with:
    - time_range: month
    - search_depth: advanced
    - max_results: 10
    Provide details about models, companies, and capabilities.""",
    tools=tools,
    temperature=0.1,
)

print(response.output_text)
```

Extract structured product data:
```python
response = client.responses.create(
    model="openai/gpt-oss-120b",
    input="""Find iPhone models on apple.com.
    Use tavily_search then tavily_extract to get:
    - Model names
    - Prices
    - Key features
    - Availability""",
    tools=tools,
    temperature=0.1,
)

print(response.output_text)
```

Extract and compare content from multiple URLs:
```python
urls = [
    "https://example.com/article1",
    "https://example.com/article2",
    "https://example.com/article3"
]

response = client.responses.create(
    model="openai/gpt-oss-120b",
    input=f"""Extract content from: {', '.join(urls)}
    Analyze and compare:
    - Main themes
    - Key differences in perspective
    - Common facts
    - Author conclusions""",
    tools=tools,
    temperature=0.1,
)

print(response.output_text)
```

Available tools:

| Tool | Description |
|---|---|
| tavily_search | Search with advanced filters (time, depth, topic, max results) |
| tavily_extract | Extract full content from specific URLs |
| tavily_scrape | Scrape single pages with clean output |
| tavily_batch_scrape | Scrape multiple URLs in parallel |
| tavily_crawl | Crawl websites with depth and pattern controls |
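
The crawling and scraping tools follow the same prompt-driven pattern as the search examples above. Here is a minimal sketch of a tavily_crawl request that reuses the client and tools defined earlier; the target site, depth, and path constraint are illustrative assumptions, not values required by the API:

```python
# Illustrative crawl request: docs.example.com, the depth of 2, and the
# /guides/ path constraint are assumptions for the sake of the example.
response = client.responses.create(
    model="openai/gpt-oss-120b",
    input="""Crawl https://docs.example.com and summarize its structure.
    Use tavily_crawl with:
    - max_depth: 2
    - only follow pages under /guides/
    Return a list of the main sections and what each one covers.""",
    tools=tools,
    temperature=0.1,
)

print(response.output_text)
```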
Search Depth:

- basic - Fast, surface-level results (under 3 seconds)
- advanced - Comprehensive, deep results (5-10 seconds)

Time Range:

- day, week, month, year

Topic:

- general, news

Challenge: Build an automated content curation system that monitors news sources, filters by relevance, extracts key information, generates summaries, and publishes daily digests!
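
One way to start on the challenge, as a minimal sketch: reuse the client and tools defined above and drive each stage (search, filter, summarize) with one prompt per topic. The topic list and the digest format are assumptions, and writing a markdown file stands in for publishing:

```python
from datetime import date

# Illustrative topic list; swap in whatever your digest should cover.
topics = ["AI model releases", "AI startup funding", "AI regulation"]

sections = []
for topic in topics:
    # Search recent news for each topic, filter for relevance, and summarize.
    response = client.responses.create(
        model="openai/gpt-oss-120b",
        input=f"""Find news about {topic} from the past day.
        Use tavily_search with:
        - topic: news
        - time_range: day
        - search_depth: advanced
        Keep only items that are directly relevant, then write a short
        summary of each with its source URL.""",
        tools=tools,
        temperature=0.1,
    )
    sections.append(f"## {topic}\n\n{response.output_text}")

# "Publishing" here is just writing a markdown file; a real system might
# email the digest or post it to a site instead.
digest = f"# Daily digest - {date.today()}\n\n" + "\n\n".join(sections)
with open(f"digest-{date.today()}.md", "w") as f:
    f.write(digest)
```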