The Model Context Protocol (MCP) has become the standard way AI agents discover and use external tools. Instead of hardcoding every integration, agents connect to MCP servers that expose capabilities through a universal protocol. This case study examines how MCP tools work for AI agents and how SearchHive fits into the ecosystem.
Background
Before MCP, every AI agent integration was custom work. If you wanted your agent to search the web, scrape a page, query a database, or interact with a SaaS tool, you wrote bespoke code for each one. Every new tool meant new integration code, new error handling, and new maintenance burden.
Anthropic introduced the Model Context Protocol in late 2024 to solve this. MCP provides a standardized way for:
- AI agents to discover available tools at runtime
- Tool providers to expose capabilities without per-agent customization
- Developers to build reusable tool servers that work with any MCP-compatible agent
The Challenge
A typical AI agent in production needs 10-20 external tools:
- Web search (multiple engines)
- Page scraping and content extraction
- Database queries
- File system access
- Email sending
- Calendar management
- API calls to internal services
- Code execution sandboxes
- Browser automation
- Notification systems
Without MCP, each tool requires custom integration code. With MCP, you connect the agent to tool servers and the protocol handles discovery, authentication, and communication.
The Solution: MCP Tools for AI Agents
MCP uses a client-server architecture:
- MCP Client -- Runs inside the AI agent (Claude Desktop, Cursor, custom agents). Discovers and calls tools.
- MCP Server -- Hosts tools and exposes them via the MCP protocol. Can run locally or remotely.
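Under the hood, client and server exchange JSON-RPC 2.0 messages over the transport (stdio or HTTP). A sketch of the tool-discovery round trip, with payloads trimmed for brevity (real responses also carry each tool's full input schema):

```python
import json

# MCP messages are JSON-RPC 2.0. The client asks for tools...
list_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",
}

# ...and the server answers with the tools it hosts (trimmed here).
list_response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [
            {"name": "swift_search", "description": "Search the web using SearchHive SwiftSearch"},
        ]
    },
}

# The client matches responses to requests by id, then reads result.tools.
assert list_response["id"] == list_request["id"]
print(json.dumps([t["name"] for t in list_response["result"]["tools"]]))
# -> ["swift_search"]
```

The SDKs hide this framing entirely; you only see it if you implement a transport yourself or debug raw traffic.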
How MCP Tool Discovery Works
# Conceptual MCP client flow (actual implementation uses the SDK)

# 1. Agent connects to MCP server
mcp_client = MCPClient("stdio", command="npx", args=["-y", "searchhive-mcp-server"])

# 2. Agent discovers available tools
tools = mcp_client.list_tools()
# Returns:
# [
#   {"name": "swift_search", "description": "Search the web using SearchHive SwiftSearch"},
#   {"name": "scrape_forge", "description": "Scrape and extract content from any URL"},
#   {"name": "deep_dive", "description": "Deep analysis of search results with content extraction"}
# ]

# 3. Agent calls a tool when needed
result = mcp_client.call_tool("swift_search", {
    "query": "latest AI news",
    "engine": "google",
    "limit": 5
})
The agent doesn't need to know about SearchHive's API structure, authentication, or response format. The MCP server handles all of that.
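To make that separation of concerns concrete, here is a minimal sketch of the server side's job — translate a tool call into an API request, attach credentials, and normalize the response into text the model can read. The HTTP call is stubbed out, and the endpoint path, auth header, and response shape are illustrative assumptions, not SearchHive's documented API:

```python
import os

def call_searchhive_api(path, params):
    # Stub standing in for the real HTTP request. A real server attaches
    # the API key here -- the agent never sees it.
    # (Endpoint path and auth header are illustrative assumptions.)
    headers = {"Authorization": f"Bearer {os.environ.get('SEARCHHIVE_API_KEY', '')}"}
    return {"results": [{"title": "Example", "url": "https://example.com"}]}

def handle_call_tool(name, arguments):
    # Server-side dispatch: turn an MCP tool call into an API request,
    # then flatten the structured response into plain text for the model.
    if name == "swift_search":
        data = call_searchhive_api("/v1/search", arguments)
        return "\n".join(f"{r['title']} - {r['url']}" for r in data["results"])
    raise ValueError(f"Unknown tool: {name}")

print(handle_call_tool("swift_search", {"query": "latest AI news", "limit": 5}))
# -> Example - https://example.com
```

Everything the agent sees is the tool name, the arguments, and a text result; credentials and response parsing stay on the server.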
Implementation: SearchHive as an MCP Server
Here's how SearchHive's three core APIs map to MCP tools:
SwiftSearch -- Real-time Web Search
# MCP tool definition for SwiftSearch
{
  "name": "swift_search",
  "description": "Search the web for real-time information across Google, Bing, and other engines. Returns structured results with titles, URLs, snippets, and rankings.",
  "inputSchema": {
    "type": "object",
    "properties": {
      "query": {"type": "string", "description": "Search query"},
      "engine": {"type": "string", "enum": ["google", "bing", "duckduckgo"], "default": "google"},
      "limit": {"type": "integer", "minimum": 1, "maximum": 50, "default": 10},
      "country": {"type": "string", "description": "Country code (us, uk, de, etc.)", "default": "us"}
    },
    "required": ["query"]
  }
}
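Strict schemas like this let the server reject malformed calls before spending API credits. A stdlib-only sketch of the checks involved — a real server would use a JSON Schema library rather than hand-rolling this:

```python
# Trimmed copy of the swift_search schema above, keeping only the
# constraints the validator below checks.
SWIFT_SEARCH_SCHEMA = {
    "required": ["query"],
    "properties": {
        "query": {"type": "string"},
        "engine": {"enum": ["google", "bing", "duckduckgo"]},
        "limit": {"minimum": 1, "maximum": 50},
    },
}

def validate(args, schema):
    # Collect every violation so the LLM can fix all of them in one retry.
    errors = []
    for key in schema["required"]:
        if key not in args:
            errors.append(f"missing required field: {key}")
    for key, rules in schema["properties"].items():
        if key not in args:
            continue
        value = args[key]
        if "enum" in rules and value not in rules["enum"]:
            errors.append(f"{key} must be one of {rules['enum']}")
        if "minimum" in rules and value < rules["minimum"]:
            errors.append(f"{key} must be >= {rules['minimum']}")
        if "maximum" in rules and value > rules["maximum"]:
            errors.append(f"{key} must be <= {rules['maximum']}")
    return errors

print(validate({"query": "ai news", "limit": 100}, SWIFT_SEARCH_SCHEMA))
# -> ['limit must be <= 50']
```

Returning the violations as text (rather than raising) gives the model something it can correct on the next attempt.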
ScrapeForge -- Web Content Extraction
{
  "name": "scrape_forge",
  "description": "Extract structured content from any web page. Handles JavaScript rendering, CAPTCHAs, and bot protection automatically.",
  "inputSchema": {
    "type": "object",
    "properties": {
      "url": {"type": "string", "description": "URL to scrape"},
      "format": {"type": "string", "enum": ["markdown", "html", "text", "json"], "default": "markdown"},
      "include_metadata": {"type": "boolean", "default": true}
    },
    "required": ["url"]
  }
}
DeepDive -- Enhanced Search with Content
{
  "name": "deep_dive",
  "description": "Search the web and automatically extract full content from top results. Combines search and scraping in one call.",
  "inputSchema": {
    "type": "object",
    "properties": {
      "query": {"type": "string", "description": "Search query"},
      "depth": {"type": "string", "enum": ["basic", "standard", "detailed"], "default": "standard"},
      "include_content": {"type": "boolean", "default": true},
      "max_pages": {"type": "integer", "minimum": 1, "maximum": 10, "default": 3}
    },
    "required": ["query"]
  }
}
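"Search and scraping in one call" can be sketched as a composition of the other two tools. The helper functions below are stand-ins, not SearchHive's implementation — the point is the shape of the pipeline:

```python
def swift_search(query, limit=10):
    # Stand-in for the search tool: returns ranked results.
    return [{"url": f"https://example.com/{i}", "title": f"Result {i}"}
            for i in range(1, limit + 1)]

def scrape_forge(url):
    # Stand-in for the scraper: returns page content as markdown.
    return f"# Content of {url}"

def deep_dive(query, max_pages=3, include_content=True):
    # Search, then optionally scrape the top max_pages results,
    # attaching the extracted content to each result.
    results = swift_search(query, limit=max_pages)
    if include_content:
        for result in results:
            result["content"] = scrape_forge(result["url"])
    return results

pages = deep_dive("transformer papers", max_pages=2)
print([p["content"] for p in pages])
# -> ['# Content of https://example.com/1', '# Content of https://example.com/2']
```

Doing this server-side saves the agent one round trip per page and keeps the scraping logic out of the agent's prompt.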
Building a Multi-Tool Agent with MCP
Here's a practical example of an agent that uses multiple MCP servers:
import asyncio
from contextlib import AsyncExitStack

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def create_mcp_client(stack, command, args, env=None):
    # Open the stdio transport and session on a shared exit stack so
    # everything is torn down cleanly when the stack unwinds.
    server_params = StdioServerParameters(command=command, args=args, env=env)
    read_stream, write_stream = await stack.enter_async_context(stdio_client(server_params))
    session = await stack.enter_async_context(ClientSession(read_stream, write_stream))
    await session.initialize()
    return session

async def agent_loop():
    async with AsyncExitStack() as stack:
        # Connect to multiple MCP servers
        searchhive = await create_mcp_client(
            stack, "npx", ["-y", "searchhive-mcp-server"],
            env={"SEARCHHIVE_API_KEY": "your_key"}
        )
        filesystem = await create_mcp_client(
            stack, "npx", ["-y", "@modelcontextprotocol/server-filesystem", "/tmp"]
        )
        github = await create_mcp_client(
            stack, "npx", ["-y", "@modelcontextprotocol/server-github"]
        )

        # Discover all tools
        sh_tools = await searchhive.list_tools()
        fs_tools = await filesystem.list_tools()
        gh_tools = await github.list_tools()
        all_tools = sh_tools.tools + fs_tools.tools + gh_tools.tools

        # Agent can now use any of these tools based on user requests.
        # Example: "Search for AI papers and save them to a file"
        search_results = await searchhive.call_tool("swift_search", {
            "query": "transformer architecture improvements 2026",
            "limit": 5
        })
        await filesystem.call_tool("write_file", {
            "path": "/tmp/ai_papers.txt",
            "content": str(search_results)
        })
        # All sessions and transports close when the stack exits.

asyncio.run(agent_loop())
Results
Using MCP with SearchHive provides several concrete benefits:
Development speed: Adding SearchHive to an MCP-compatible agent takes minutes, not hours. No custom integration code needed.
Consistency: Every agent gets the same tool interface. Whether you're using Claude Desktop, a custom agent, or Cursor, the SearchHive tools work identically.
Composability: MCP tools compose naturally. An agent can search, scrape, write to files, and query databases in a single workflow without custom glue code.
Cost efficiency: SearchHive's credit system works well with MCP because agents only make API calls when they actually need to. The free tier (500 credits/mo) handles prototyping. The Starter plan ($9/mo for 5,000 credits) covers moderate agent usage.
Lessons Learned
- Tool descriptions matter more than you think. The LLM decides which tool to call based on the tool description. Write clear, specific descriptions that help the model choose correctly.
- Input schemas should be strict. Use enums and minimum/maximum values to prevent the agent from sending invalid parameters. This reduces API errors and wasted credits.
- Caching at the MCP server level is powerful. If multiple agents share a SearchHive MCP server, the server can cache responses across agents, reducing duplicate API calls.
- Error messages should be actionable. When an API call fails, return a message the LLM can understand and relay to the user. "Rate limited, retry in 5 seconds" is better than "HTTP 429".
- Start with search, add scraping later. Most agents need search more than scraping. Prioritize SwiftSearch in your MCP server, then add ScrapeForge and DeepDive as the agent's capabilities expand.
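The error-message lesson is particularly cheap to act on. A sketch of translating raw HTTP failures into messages an LLM can relay or act on — the status-to-message mapping is illustrative, not SearchHive's actual error vocabulary:

```python
# Statuses worth retrying automatically (illustrative choice).
RETRYABLE = {429, 502, 503}

def describe_error(status, retry_after=None):
    # Turn a bare HTTP status into an instruction the LLM can follow,
    # instead of surfacing "HTTP 429" and hoping for the best.
    if status == 401:
        return "Authentication failed: check that SEARCHHIVE_API_KEY is set and valid."
    if status == 429:
        wait = retry_after or 5
        return f"Rate limited: retry in {wait} seconds."
    if status in RETRYABLE:
        return "Temporary upstream error: retry with backoff."
    return f"Request failed with HTTP {status}; do not retry without changing the request."

print(describe_error(429))
# -> Rate limited: retry in 5 seconds.
```

The model can act on "retry in 5 seconds" or "check your API key"; it can only apologize for a bare status code.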
Get Started
SearchHive works natively with MCP-compatible agents. Sign up free for 500 API credits, then connect via the MCP server. See the docs for setup instructions and configuration examples.