How to Give Claude Web Access — Search API Integration Guide
Anthropic's Claude models are powerful out of the box, but they have a training cutoff. To give Claude access to current information — pricing, news, documentation, or any live web data — you need to integrate a search API. This tutorial covers every approach, from Claude's new native web search to custom tool integrations with external APIs.
Key Takeaways
- Claude now has a built-in Web Search tool available through Anthropic's API (requires Claude 3.5+)
- For more control, you can define custom tools that call external search APIs
- SearchHive works as a Claude tool with search, scraping, and deep research capabilities
- The tool_use API format makes it straightforward to wire any HTTP API as a Claude tool
Prerequisites
- Python 3.9+
- Anthropic API key (from console.anthropic.com)
- Python packages: `pip install anthropic requests`
Step 1: Claude's Native Web Search Tool
Anthropic recently added a built-in web search tool to Claude. When enabled, Claude can autonomously decide to search the web and cite sources in its responses:
```python
import anthropic

client = anthropic.Anthropic(api_key="your-anthropic-key")

message = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=4096,
    tools=[
        {
            "type": "web_search_20250305",
            "name": "web_search",
            "max_uses": 5  # cap the number of searches per request
        }
    ],
    messages=[
        {
            "role": "user",
            "content": "What are the latest updates to the Python 3.13 release?"
        }
    ]
)

for block in message.content:
    if block.type == "text":
        print(block.text)
    elif block.type == "server_tool_use":
        # The native web search tool reports its queries as server_tool_use blocks
        print(f"[Searched: {block.input}]")
```
The native web search tool is convenient but limited — you can't control which search engine is used, customize result formatting, or add scraping capabilities. For production applications, custom tools give you more control.
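One thing the native tool does give you is citations. A small helper can collect the cited URLs from a response; this is a sketch that assumes text blocks may carry a `citations` list whose items expose a `url` field, which is the shape Anthropic's web search responses use:

```python
def collect_citation_urls(content_blocks):
    """Gather unique cited URLs from a response's content blocks.

    Assumes text blocks may have a `citations` attribute whose items
    expose a `url` field; blocks without citations are skipped.
    """
    urls = []
    for block in content_blocks:
        for citation in getattr(block, "citations", None) or []:
            url = getattr(citation, "url", None)
            if url and url not in urls:
                urls.append(url)
    return urls
```

Call it as `collect_citation_urls(message.content)` after the request above to log which sources Claude actually used.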
Step 2: Custom Search Tool with SearchHive
SearchHive provides three APIs that pair perfectly with Claude's tool use format. Define them as custom tools to give Claude search, scraping, and deep research capabilities:
```python
import anthropic
import requests

SEARCHHIVE_KEY = "your-searchhive-key"

def call_searchhive_swift_search(query):
    """Execute a SwiftSearch query and format the results as a bullet list."""
    resp = requests.get(
        "https://api.searchhive.dev/v1/swift-search",
        headers={"Authorization": f"Bearer {SEARCHHIVE_KEY}"},
        params={"query": query, "limit": 5},
        timeout=30,
    )
    data = resp.json()
    return "\n".join(f"- {r['title']}: {r['snippet']}" for r in data.get("results", []))

def call_searchhive_scrape(url):
    """Scrape a web page and return its content as markdown."""
    resp = requests.post(
        "https://api.searchhive.dev/v1/scrape-forge",
        headers={"Authorization": f"Bearer {SEARCHHIVE_KEY}", "Content-Type": "application/json"},
        json={"url": url, "format": "markdown"},
        timeout=60,
    )
    return resp.json().get("content", "Failed to scrape")[:4000]

def call_searchhive_deep_dive(query):
    """Run a comprehensive deep research query."""
    resp = requests.get(
        "https://api.searchhive.dev/v1/deep-dive",
        headers={"Authorization": f"Bearer {SEARCHHIVE_KEY}"},
        params={"query": query},
        timeout=120,
    )
    return resp.json().get("summary", "No results")[:3000]
```
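Network calls to any search API can fail transiently, so it's worth wrapping the helpers above in a retry. This is a minimal sketch with illustrative backoff values (not SearchHive recommendations); the sleep function is injectable so it can be tested without real waiting:

```python
import time

def with_retries(fn, *args, attempts=3, base_delay=1.0, _sleep=time.sleep, **kwargs):
    """Call fn(*args, **kwargs), retrying on any exception.

    Waits base_delay * 2**attempt between tries (exponential backoff)
    and re-raises the last exception once attempts are exhausted.
    """
    for attempt in range(attempts):
        try:
            return fn(*args, **kwargs)
        except Exception:
            if attempt == attempts - 1:
                raise
            _sleep(base_delay * (2 ** attempt))
```

Usage: `with_retries(call_searchhive_swift_search, "python 3.13 release")`.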
Step 3: Define Tool Schemas for Claude
Claude's tool_use format requires a JSON Schema definition for each tool:
```python
client = anthropic.Anthropic()

tools = [
    {
        "name": "swift_search",
        "description": "Search the web for current information. Returns titles, URLs, and snippets. Use for factual lookups, pricing data, and news.",
        "input_schema": {
            "type": "object",
            "properties": {
                "query": {"type": "string", "description": "The search query string"}
            },
            "required": ["query"]
        }
    },
    {
        "name": "scrape_forge",
        "description": "Extract the full content of a web page as clean markdown. Use after finding a relevant URL via search.",
        "input_schema": {
            "type": "object",
            "properties": {
                "url": {"type": "string", "description": "The full URL of the page to scrape"}
            },
            "required": ["url"]
        }
    },
    {
        "name": "deep_dive",
        "description": "Conduct comprehensive research on a complex topic. Returns synthesized analysis with citations.",
        "input_schema": {
            "type": "object",
            "properties": {
                "query": {"type": "string", "description": "The research question to investigate"}
            },
            "required": ["query"]
        }
    }
]

# Map tool names to handler functions
tool_handlers = {
    "swift_search": call_searchhive_swift_search,
    "scrape_forge": call_searchhive_scrape,
    "deep_dive": call_searchhive_deep_dive,
}
```
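A mismatch between the schema names and the handler-map keys surfaces as a `KeyError` deep inside the agent loop. A quick startup check catches it early; this sketch assumes the `tools` list and `tool_handlers` dict shapes defined above:

```python
def check_tool_wiring(tools, tool_handlers):
    """Raise ValueError if any declared tool lacks a handler, or vice versa."""
    declared = {t["name"] for t in tools}
    handled = set(tool_handlers)
    missing = declared - handled   # schemas Claude can call but we can't serve
    orphans = handled - declared   # handlers Claude can never reach
    if missing or orphans:
        raise ValueError(
            f"unhandled tools: {sorted(missing)}, orphan handlers: {sorted(orphans)}"
        )
```

Run `check_tool_wiring(tools, tool_handlers)` once at startup, before entering the loop.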
Step 4: Run the Agent Loop
Claude may make multiple tool calls in a single response. Handle them all, then send results back:
```python
def run_claude_with_search(user_message: str, max_turns: int = 6):
    """Run Claude with web search, scraping, and deep research tools."""
    messages = [{"role": "user", "content": user_message}]
    for turn in range(max_turns):
        response = client.messages.create(
            model="claude-sonnet-4-20250514",
            max_tokens=4096,
            tools=tools,
            messages=messages,
        )
        text_blocks = [b for b in response.content if b.type == "text"]
        tool_uses = [b for b in response.content if b.type == "tool_use"]
        if not tool_uses:
            # No more tool calls: return Claude's final answer
            return "\n".join(b.text for b in text_blocks)
        # Echo the assistant turn back, then answer every tool call
        messages.append({"role": "assistant", "content": response.content})
        tool_results = []
        for tool_use in tool_uses:
            handler = tool_handlers[tool_use.name]
            result = handler(**tool_use.input)
            tool_results.append({
                "type": "tool_result",
                "tool_use_id": tool_use.id,
                "content": result,
            })
        # All tool_result blocks for one assistant turn must go back
        # in a single user message
        messages.append({"role": "user", "content": tool_results})
    return "Max turns reached"
```
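If a handler raises (network timeout, malformed input), the loop above crashes mid-conversation. One option is to catch the error and return it as the tool result, letting Claude recover or try a different tool. This wrapper is a sketch, not part of the Anthropic SDK; the Messages API does accept an `is_error` flag on tool_result blocks:

```python
def safe_tool_call(handler, tool_input):
    """Run a tool handler, returning (result_text, is_error).

    On failure, the error message becomes the tool result so Claude
    can see what went wrong instead of the loop crashing.
    """
    try:
        return str(handler(**tool_input)), False
    except Exception as exc:
        return f"Tool error: {exc}", True
```

In the loop, use `result, is_error = safe_tool_call(handler, tool_use.input)` and add `"is_error": is_error` to the tool_result block.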
```python
# Example usage
answer = run_claude_with_search(
    "Research the top 5 web scraping APIs available in 2026. "
    "Compare their pricing, features, and rate limits."
)
print(answer)
```
Step 5: Computer Use vs Search API
Claude's Computer Use feature gives it direct browser access, but it's slow (5-10 seconds per page) and expensive in tokens. SearchHive's ScrapeForge API is a better alternative for structured content extraction:
- Computer Use: ~5-10 seconds per page, high token cost
- ScrapeForge: ~1-2 seconds per page, low cost (1 credit per page)
For most use cases, search + scrape is faster and cheaper:
- SwiftSearch to find relevant URLs
- ScrapeForge to extract content from those URLs
- Claude synthesizes the information
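The search-then-scrape flow factors naturally into a small pipeline. In this sketch the search and scrape callables are injected, so it works with the SearchHive helpers above or any other backend; note it assumes a `search_fn` that returns URLs (the SwiftSearch helper above would need a variant that returns `r["url"]` values instead of formatted snippets):

```python
def search_and_scrape(query, search_fn, scrape_fn, max_pages=3):
    """Find URLs for a query, then scrape each one.

    search_fn(query) -> list of URLs; scrape_fn(url) -> page text.
    Returns (url, content) pairs ready to hand to Claude for synthesis.
    """
    pages = []
    for url in search_fn(query)[:max_pages]:
        pages.append((url, scrape_fn(url)))
    return pages
```

Capping `max_pages` keeps token costs predictable regardless of how many results the search returns.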
Common Issues and Fixes
Tool not being called: Claude decides when to use tools. If it doesn't call a search tool, make the prompt explicitly mention that current information is needed. Phrases like "What is the current..." or "Look up the latest..." trigger tool use.
Rate limits: Anthropic's API enforces request and token rate limits, and every turn of the agent loop counts against them. On the SearchHive side, the free tier (500 credits) is sufficient for testing; the Starter plan ($9/mo for 5,000 credits) handles most production workloads.
Long tool results: Claude has a 200K context window, but sending 10 pages of scraped content wastes tokens. Truncate ScrapeForge results to 3,000-4,000 characters.
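Naive slicing can cut mid-word or mid-sentence. A small helper that truncates at the last whitespace inside the limit keeps tool results readable (the character limit here is illustrative):

```python
def truncate_result(text, limit=4000):
    """Truncate tool output to roughly `limit` chars, breaking at whitespace."""
    if len(text) <= limit:
        return text
    cut = text.rfind(" ", 0, limit)
    if cut == -1:
        cut = limit  # no whitespace found: hard cut
    return text[:cut] + " …[truncated]"
```

Apply it to each tool result before appending the tool_result block, rather than relying on bare `[:4000]` slices.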
Cost optimization: Each tool call adds tokens to the conversation. Use SwiftSearch first (lightweight, fast) and only fall back to DeepDive when the question genuinely requires multi-source synthesis.
Next Steps
- Get a SearchHive API key — free tier, no credit card
- Check the Anthropic docs for the latest tool_use API updates
- Read /blog/openai-function-calling-with-web-search-apis for the OpenAI equivalent
- See /compare/tavily for a full comparison of search API options
Related: /tutorials/anthropic-mcp-integration | /compare/serpapi