The Model Context Protocol (MCP) is rapidly becoming the standard way to give AI agents access to external tools and data. Building an MCP search server lets any MCP-compatible client -- Claude Desktop, VS Code, custom agents -- perform web searches through a standardized interface.
This tutorial walks through building a complete MCP search server using SearchHive SwiftSearch as the search backend. You'll have a working server in under 30 minutes.
Prerequisites
- Python 3.10 or higher
- A SearchHive API key (free signup, 500 credits included)
- Basic familiarity with MCP concepts (tools, resources, prompts)
- pip installed
What You'll Build
An MCP server that exposes two tools:
- web_search -- search the web and return results
- scrape_page -- scrape a specific URL for full content
Any MCP client can then call these tools to search and retrieve web data.
Step 1: Set Up the Project
Create a new directory and install dependencies:
mkdir mcp-search-server && cd mcp-search-server
python -m venv venv
source venv/bin/activate
pip install mcp requests python-dotenv
Create your environment file:
# .env
SEARCHHIVE_API_KEY=sh_live_your_key_here
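The project structure at the end of this tutorial lists a requirements.txt; a minimal one matching the install command above would be (add version pins if you want reproducible builds):

```text
mcp
requests
python-dotenv
```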
Step 2: Create the MCP Server
Create server.py:
import json
import os
import requests
from dotenv import load_dotenv
from mcp.server import Server
from mcp.types import Tool, TextContent
load_dotenv()
API_KEY = os.getenv("SEARCHHIVE_API_KEY")
server = Server("searchhive-search")
@server.list_tools()
async def list_tools():
    return [
        Tool(
            name="web_search",
            description="Search the web using SearchHive SwiftSearch. Returns titles, URLs, and snippets for the top results.",
            inputSchema={
                "type": "object",
                "properties": {
                    "query": {
                        "type": "string",
                        "description": "Search query string"
                    },
                    "limit": {
                        "type": "integer",
                        "description": "Number of results to return (default: 5, max: 20)",
                        "default": 5
                    },
                    "country": {
                        "type": "string",
                        "description": "Country code for localized results (default: us)",
                        "default": "us"
                    }
                },
                "required": ["query"]
            }
        ),
        Tool(
            name="scrape_page",
            description="Scrape a web page and return its content as clean markdown. Handles JavaScript rendering automatically.",
            inputSchema={
                "type": "object",
                "properties": {
                    "url": {
                        "type": "string",
                        "description": "URL of the page to scrape"
                    }
                },
                "required": ["url"]
            }
        )
    ]
@server.call_tool()
async def call_tool(name, arguments):
    if name == "web_search":
        return await handle_search(arguments)
    elif name == "scrape_page":
        return await handle_scrape(arguments)
    else:
        return [TextContent(type="text", text=f"Unknown tool: {name}")]
async def handle_search(args):
    query = args["query"]
    limit = args.get("limit", 5)
    country = args.get("country", "us")
    response = requests.post(
        "https://api.searchhive.dev/v1/search",
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"query": query, "limit": limit, "country": country},
        timeout=30
    )
    if response.status_code != 200:
        return [TextContent(type="text", text=f"Search API error: {response.status_code}")]
    data = response.json()
    results = data.get("results", [])
    if not results:
        return [TextContent(type="text", text=f"No results found for: {query}")]
    output_lines = [f"Search results for: {query}\n"]
    for i, r in enumerate(results, 1):
        output_lines.append(
            f"{i}. {r.get('title', 'No title')}\n"
            f"   URL: {r.get('url', '')}\n"
            f"   {r.get('snippet', 'No description')}\n"
        )
    return [TextContent(type="text", text="\n".join(output_lines))]
async def handle_scrape(args):
    url = args["url"]
    response = requests.post(
        "https://api.searchhive.dev/v1/scrape",
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"url": url, "format": "markdown"},
        timeout=60
    )
    if response.status_code != 200:
        return [TextContent(type="text", text=f"Scrape API error: {response.status_code}")]
    data = response.json()
    markdown = data.get("markdown", "No content retrieved")
    # Truncate very long pages to stay within context limits
    if len(markdown) > 15000:
        markdown = markdown[:15000] + "\n\n[Content truncated - full page exceeds context limit]"
    return [TextContent(type="text", text=f"Content from {url}:\n\n{markdown}")]
if __name__ == "__main__":
    import asyncio
    from mcp.server.stdio import stdio_server

    async def main():
        async with stdio_server() as (read_stream, write_stream):
            await server.run(read_stream, write_stream, server.create_initialization_options())

    asyncio.run(main())
Step 3: Test the Server Locally
Run the server in stdio mode. It communicates over stdin/stdout, so it will wait silently for a client rather than print anything:
python server.py
To test without an MCP client, create a simple test script:
# test_server.py
import asyncio
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def test():
    params = StdioServerParameters(command="python", args=["server.py"])
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()

            # List available tools
            tools = await session.list_tools()
            print("Available tools:", [t.name for t in tools.tools])

            # Test web search
            result = await session.call_tool("web_search", {
                "query": "best python web scraping libraries 2026",
                "limit": 3
            })
            print("\nSearch results:")
            print(result.content[0].text)

            # Test scrape
            result = await session.call_tool("scrape_page", {
                "url": "https://example.com"
            })
            print("\nScrape result (first 500 chars):")
            print(result.content[0].text[:500])

asyncio.run(test())
Step 4: Configure Claude Desktop
Add your MCP server to Claude Desktop's config file:
macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
Windows: %APPDATA%\Claude\claude_desktop_config.json
{
"mcpServers": {
"searchhive": {
"command": "python",
"args": ["/path/to/mcp-search-server/server.py"],
"env": {
"SEARCHHIVE_API_KEY": "sh_live_your_key_here"
}
}
}
}
Restart Claude Desktop. You'll see the searchhive server listed under the tools icon. Now you can ask Claude to search the web or scrape pages -- it will use your MCP server.
Step 5: Add Advanced Features
Rate Limiting
Add a simple rate limiter to avoid hitting API quotas:
import asyncio
import time

last_request_time = 0.0
MIN_INTERVAL = 1.0  # seconds between requests

async def rate_limited_request(url, payload):
    global last_request_time
    now = time.time()
    wait = MIN_INTERVAL - (now - last_request_time)
    if wait > 0:
        await asyncio.sleep(wait)
    last_request_time = time.time()
    return requests.post(
        url,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json=payload,
        timeout=30
    )
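The spacing logic can be sanity-checked offline by swapping the HTTP call for an injectable function (rate_limited_call and demo below are illustrative names, not part of any SDK): the second of two back-to-back calls should be delayed until MIN_INTERVAL has elapsed.

```python
import asyncio
import time

MIN_INTERVAL = 0.1  # shortened so the demo runs quickly
last_request_time = 0.0

async def rate_limited_call(fn):
    # Same spacing logic as rate_limited_request, with the HTTP call
    # replaced by an injectable function so it can run offline.
    global last_request_time
    now = time.time()
    wait = MIN_INTERVAL - (now - last_request_time)
    if wait > 0:
        await asyncio.sleep(wait)
    last_request_time = time.time()
    return fn()

async def demo():
    start = time.time()
    await rate_limited_call(lambda: "ok")
    await rate_limited_call(lambda: "ok")
    return time.time() - start

elapsed = asyncio.run(demo())
print(f"two calls took {elapsed:.2f}s")  # at least MIN_INTERVAL apart
```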
Error Handling with Retries
import asyncio

async def search_with_retry(query, limit=5, max_retries=3):
    for attempt in range(max_retries):
        try:
            response = await asyncio.to_thread(
                requests.post,
                "https://api.searchhive.dev/v1/search",
                headers={"Authorization": f"Bearer {API_KEY}"},
                json={"query": query, "limit": limit},
                timeout=30
            )
            if response.status_code == 429:
                wait = 2 ** attempt
                await asyncio.sleep(wait)
                continue
            response.raise_for_status()
            return response.json()
        except requests.RequestException as e:
            if attempt == max_retries - 1:
                return {"error": str(e)}
            await asyncio.sleep(2 ** attempt)
    return {"error": "Max retries exceeded"}
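The retry/backoff shape itself can be exercised without the network by replacing the HTTP call with any callable. A sketch (retry_with_backoff and flaky are illustrative names) using a stub that fails twice and then succeeds:

```python
import asyncio

async def retry_with_backoff(fn, max_retries=3, base_delay=0.01):
    # Same retry/backoff structure as search_with_retry, with the HTTP
    # call replaced by an arbitrary callable for offline testing.
    for attempt in range(max_retries):
        try:
            return fn()
        except Exception as e:
            if attempt == max_retries - 1:
                return {"error": str(e)}
            await asyncio.sleep(base_delay * 2 ** attempt)
    return {"error": "Max retries exceeded"}

# Stub that raises on the first two calls, then succeeds:
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient failure")
    return {"results": ["ok"]}

result = asyncio.run(retry_with_backoff(flaky))
print(result)        # {'results': ['ok']}
print(calls["n"])    # 3 -- two failures, one success
```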
Adding a Third Tool: Deep Research
Use SearchHive DeepDive for comprehensive topic research:
Tool(
    name="deep_research",
    description="Perform deep research on a topic. Scans multiple sources and returns a comprehensive research summary.",
    inputSchema={
        "type": "object",
        "properties": {
            "topic": {
                "type": "string",
                "description": "Topic to research"
            },
            "depth": {
                "type": "string",
                "enum": ["quick", "standard", "deep"],
                "description": "Research depth level",
                "default": "standard"
            }
        },
        "required": ["topic"]
    }
)
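The tool also needs a handler wired into call_tool. A sketch is below; the /v1/deepdive endpoint and the summary/sources response fields are assumptions here, so check the SearchHive API reference for the actual path and schema. API_KEY and TextContent come from server.py above.

```python
import requests

def format_deepdive(data):
    # Turn a DeepDive response dict into readable text.
    # "summary" and "sources" are assumed field names.
    lines = [data.get("summary", "No summary returned"), "", "Sources:"]
    for s in data.get("sources", []):
        lines.append(f"- {s.get('title', 'Untitled')}: {s.get('url', '')}")
    return "\n".join(lines)

async def handle_deep_research(args):
    response = requests.post(
        "https://api.searchhive.dev/v1/deepdive",  # endpoint name is an assumption
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"topic": args["topic"], "depth": args.get("depth", "standard")},
        timeout=120  # deep research can take noticeably longer than a search
    )
    if response.status_code != 200:
        return [TextContent(type="text", text=f"DeepDive API error: {response.status_code}")]
    return [TextContent(type="text", text=format_deepdive(response.json()))]
```

Register it by adding an elif name == "deep_research" branch to the call_tool dispatcher.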
Common Issues
"Server not appearing in Claude Desktop": Double-check the config file path. Claude must be restarted after editing the config. Verify the Python path is absolute.
API key errors: Make sure your .env file is in the same directory as server.py, or pass the key directly in the Claude Desktop config env block.
Rate limit errors (429): SearchHive free tier allows 500 credits. Upgrade to Starter ($9/mo) for 5,000 credits with higher rate limits.
Timeout on large pages: Some pages take 10-20 seconds to render. Increase the timeout in the requests.post() call if you're hitting timeouts on complex sites.
Next Steps
- Add caching to avoid re-searching the same queries (Redis or SQLite)
- Build a web UI for the MCP server using Next.js
- Add more SearchHive tools: SwiftSearch, ScrapeForge, and DeepDive
- Package the server for pip installation
- Add streaming support for real-time result delivery
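The caching bullet is the easiest to prototype. A minimal in-memory TTL cache (all names illustrative; swap the dict for Redis or SQLite if results should survive restarts):

```python
import time

class SearchCache:
    """Tiny in-memory TTL cache keyed by query string."""
    def __init__(self, ttl_seconds=300):
        self.ttl = ttl_seconds
        self._store = {}

    def get(self, query):
        hit = self._store.get(query)
        if hit is None:
            return None
        value, stored_at = hit
        if time.time() - stored_at > self.ttl:
            del self._store[query]  # expired entry
            return None
        return value

    def put(self, query, value):
        self._store[query] = (value, time.time())

cache = SearchCache(ttl_seconds=300)

def cached_search(query, fetch):
    # fetch is the real search call (e.g. the SwiftSearch request)
    result = cache.get(query)
    if result is None:
        result = fetch(query)
        cache.put(query, result)
    return result

calls = []
def fake_fetch(q):
    calls.append(q)
    return {"results": [q.upper()]}

first = cached_search("mcp servers", fake_fetch)
second = cached_search("mcp servers", fake_fetch)
print(len(calls))  # 1 -- the second call was served from cache
```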
Complete Project Structure
mcp-search-server/
    server.py           # MCP server (main file)
    test_server.py      # Test script
    .env                # API key
    requirements.txt    # Dependencies
    README.md           # Documentation
For the full source code and additional examples, see the SearchHive documentation. Sign up at searchhive.dev to get your free API key with 500 credits.