MCP Server Development — Common Questions Answered
Model Context Protocol (MCP) is changing how AI applications interact with external tools and data sources. If you're building an MCP server, you probably have questions about architecture, tool design, error handling, and real-world deployment. This guide answers the most common questions developers ask about MCP server development.
Key Takeaways
- MCP servers expose tools and resources to AI clients through a standardized protocol
- Python and TypeScript are the most mature SDK ecosystems for MCP server development
- Proper error handling and input validation separate production servers from prototypes
- Search APIs like SearchHive's SwiftSearch integrate naturally as MCP tools
What Is an MCP Server?
An MCP (Model Context Protocol) server is a lightweight service that exposes tools, resources, and prompts to AI clients like Claude, GPT-based apps, or custom agents. Think of it as a bridge between an LLM and external systems — databases, APIs, file systems, web services.
The protocol was introduced by Anthropic in late 2024 and has quickly become the standard way to give AI models access to real-world data and actions. Instead of hardcoding tool integrations into every AI app, you build an MCP server once and any MCP-compatible client can use it.
MCP uses JSON-RPC 2.0 over stdio or SSE (Server-Sent Events) transports. Servers advertise their capabilities during initialization, then clients call tools or request resources as needed.
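A tool call on the wire is an ordinary JSON-RPC 2.0 request. The sketch below shows roughly what a `tools/call` message looks like; the tool name and arguments are invented for illustration, and in practice the SDK builds and frames this envelope for you:

```python
import json

# A hypothetical tools/call request as it would travel over stdio or SSE.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "search_web",
        "arguments": {"query": "latest MCP spec"},
    },
}

# Over stdio this would be written as a line of JSON to the server's stdin.
wire_message = json.dumps(request)
print(wire_message)
```

Because every server speaks this same envelope, any MCP client can call any MCP server without bespoke glue code.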
How Do I Get Started with MCP Server Development?
Start by choosing your language. The two most mature options:
Python SDK (mcp package):
```python
import httpx

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("my-server")

@mcp.tool()
def search_web(query: str) -> str:
    """Search the web for information."""
    resp = httpx.get(
        "https://api.searchhive.dev/v1/swiftsearch",
        params={"q": query, "limit": 5},
        headers={"Authorization": "Bearer YOUR_API_KEY"},
    )
    data = resp.json()
    results = []
    for r in data.get("results", []):
        results.append(f"{r['title']}: {r['snippet']}")
    return "\n".join(results)

if __name__ == "__main__":
    mcp.run(transport="stdio")
```
TypeScript SDK (@modelcontextprotocol/sdk):
```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const server = new McpServer({ name: "my-server", version: "1.0.0" });

server.tool("search_web", { query: z.string() }, async ({ query }) => {
  const resp = await fetch(
    `https://api.searchhive.dev/v1/swiftsearch?q=${encodeURIComponent(query)}&limit=5`,
    { headers: { Authorization: `Bearer ${process.env.API_KEY}` } }
  );
  const data = await resp.json();
  return {
    content: [
      {
        type: "text" as const,
        text: data.results.map((r: any) => `${r.title}: ${r.snippet}`).join("\n"),
      },
    ],
  };
});

const transport = new StdioServerTransport();
await server.connect(transport);
```
Install the Python SDK with `pip install mcp` or the TypeScript SDK with `npm install @modelcontextprotocol/sdk`.
What Transports Does MCP Support?
MCP supports two transport mechanisms:
- stdio — The server communicates via standard input/output. This is the default for local development and CLI tools; the AI client launches the server as a subprocess.
- SSE (Server-Sent Events) — The server runs as an HTTP endpoint and clients connect remotely. This is what you want for production deployments, shared tools, and multi-user scenarios.
stdio is simpler to debug (you can test in a terminal). SSE is better for deployment since multiple clients can connect to one server. Most frameworks let you switch between them with a single line change.
How Should I Structure My MCP Tools?
Good tool design is critical. LLMs need clear descriptions to use tools correctly.
Do:
- Use descriptive names (`search_web`, `read_database`, not `tool1`)
- Write thorough docstrings — the LLM reads these to decide when to call the tool
- Return structured text, not raw JSON dumps
- Validate inputs and return helpful error messages
- Keep tools focused on one responsibility
Don't:
- Create tools that do too many things (e.g., cramming "search and summarize" into one tool)
- Return megabytes of raw data — summarize or paginate
- Use abbreviations the LLM won't understand
```python
# Good tool design
@mcp.tool()
def get_company_funding(company_name: str) -> str:
    """Get the latest funding round for a company. Returns company name,
    funding amount, round type, and date. Only returns the most recent round."""
    # Implementation here
    pass

# Bad tool design
@mcp.tool()
def get_data(query: str, type: str, limit: int = 100) -> str:
    """Get data."""
    # Too vague, too many responsibilities
    pass
```
How Do I Handle Errors in MCP Servers?
Error handling in MCP servers works through JSON-RPC error responses. When a tool fails, return a result flagged `isError: true` with a human-readable message:
```python
import os

import httpx

API_KEY = os.environ["SEARCHHIVE_API_KEY"]

@mcp.tool()
def scrape_url(url: str) -> str:
    """Extract the main text content from a web page."""
    try:
        resp = httpx.get(
            "https://api.searchhive.dev/v1/scrapeforge",
            params={"url": url},
            headers={"Authorization": f"Bearer {API_KEY}"},
            timeout=30.0,
        )
        resp.raise_for_status()
        data = resp.json()
        return data.get("content", "No content extracted")
    except httpx.TimeoutException:
        return "Error: Request timed out. The website may be slow or blocking requests."
    except httpx.HTTPStatusError as e:
        return f"Error: HTTP {e.response.status_code}. The URL may be invalid or protected."
```
The `isError` pattern lets the LLM know the tool call failed and that it should try a different approach, rather than treating an error message as valid content.
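Under the hood, a failed call reaches the client as a result object carrying that error flag. A sketch of the shape (the SDK constructs this envelope for you; it is shown here only to illustrate how a client can branch on the flag rather than parse the text):

```python
# Sketch of the tool-result payload a client receives when a tool fails.
failed_result = {
    "isError": True,
    "content": [
        {"type": "text", "text": "Error: Request timed out."},
    ],
}

# A client (or agent loop) can branch on the flag instead of guessing
# from the message text whether the call succeeded.
if failed_result["isError"]:
    print("Tool failed:", failed_result["content"][0]["text"])
```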
Can MCP Servers Access Web Search and Scraping APIs?
This is one of the most popular use cases. MCP servers frequently wrap search and scraping APIs to give AI models real-time web access.
SearchHive's APIs map naturally to MCP tools:
- SwiftSearch → a `web_search` tool for real-time search results
- ScrapeForge → a `scrape_page` tool for extracting content from specific URLs
- DeepDive → a `deep_research` tool for multi-step research workflows
```python
import os

import httpx

SEARCHHIVE_API_KEY = os.environ.get("SEARCHHIVE_API_KEY")

@mcp.tool()
def web_search(query: str, num_results: int = 5) -> str:
    """Search the web using SearchHive SwiftSearch. Returns titles,
    URLs, and snippets from search results."""
    resp = httpx.get(
        "https://api.searchhive.dev/v1/swiftsearch",
        params={"q": query, "limit": num_results},
        headers={"Authorization": f"Bearer {SEARCHHIVE_API_KEY}"},
    )
    data = resp.json()
    output = []
    for r in data.get("results", []):
        output.append(f"- **{r['title']}** ({r['url']}): {r['snippet']}")
    return "\n".join(output) if output else "No results found."

@mcp.tool()
def scrape_page(url: str) -> str:
    """Extract clean text content from a web page using SearchHive ScrapeForge.
    Handles JavaScript rendering and anti-bot bypass."""
    resp = httpx.post(
        "https://api.searchhive.dev/v1/scrapeforge",
        json={"url": url, "format": "markdown"},
        headers={"Authorization": f"Bearer {SEARCHHIVE_API_KEY}"},
    )
    data = resp.json()
    return data.get("content", "Could not extract content from this page.")
```
This gives any MCP-compatible AI client instant access to live web search and page scraping without building those systems yourself.
How Do I Test an MCP Server Locally?
Use the MCP Inspector, the official debugging tool for MCP servers:

```bash
# Run the inspector and point it at your server
npx @modelcontextprotocol/inspector python my_server.py
```
The inspector gives you a web UI to list tools, call them with parameters, and see responses. It's the fastest way to iterate during development.
For automated testing, the mcp Python SDK includes a client you can use in pytest:
```python
import pytest
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

@pytest.mark.asyncio
async def test_search_tool():
    server_params = StdioServerParameters(command="python", args=["my_server.py"])
    async with stdio_client(server_params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            result = await session.call_tool("web_search", arguments={"query": "test"})
            assert len(result.content[0].text) > 0
```
How Do I Deploy an MCP Server to Production?
For production, you'll want SSE transport behind a reverse proxy:
- Containerize your server with Docker
- Use SSE transport instead of stdio
- Add authentication — verify API keys or tokens
- Rate limit tool calls per client
- Monitor — log tool invocations, latency, and errors
```python
# Switch to SSE transport for production; host and port are passed
# as FastMCP settings rather than hardcoded per run.
mcp = FastMCP("my-server", host="0.0.0.0", port=8000)

if __name__ == "__main__":
    mcp.run(transport="sse")
```
Behind nginx or a load balancer, this gives you a production-ready MCP endpoint that any compatible client can connect to.
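The authentication and rate-limiting items above can be prototyped with a plain in-memory limiter. This is an illustrative stdlib sketch, not part of the MCP SDK; the key set, limits, and function names are assumptions, and in production you would back this with Redis or your gateway's built-in controls:

```python
import time
from collections import defaultdict, deque

VALID_KEYS = {"client-abc"}   # hypothetical issued API keys
RATE_LIMIT = 10               # max calls per client per window
WINDOW_SECONDS = 60.0

_calls: dict[str, deque] = defaultdict(deque)

def authorize(api_key: str) -> None:
    """Reject unknown keys and clients that exceed the per-window budget."""
    if api_key not in VALID_KEYS:
        raise PermissionError("Invalid API key")
    now = time.monotonic()
    window = _calls[api_key]
    # Drop timestamps that have aged out of the sliding window.
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()
    if len(window) >= RATE_LIMIT:
        raise RuntimeError("Rate limit exceeded; retry later")
    window.append(now)
```

Calling `authorize()` at the top of each tool handler keeps the policy in one place and returns a clear, LLM-readable failure instead of silently dropping requests.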
What Are Common MCP Server Development Mistakes?
Based on the growing MCP ecosystem, these are the pitfalls that trip up most developers:
- Skipping tool descriptions — The LLM relies on your docstrings. Vague descriptions lead to incorrect tool usage.
- Returning raw API responses — LLMs work best with summarized, structured text. Don't dump 50-line JSON objects.
- No input validation — Malformed URLs, empty queries, and out-of-range numbers will crash your server.
- Ignoring timeouts — External API calls can hang. Always set timeouts (10-30 seconds is reasonable for web requests).
- Not handling rate limits — If your server wraps paid APIs, track usage and return clear messages when limits are hit.
- stdio-only thinking — Design for SSE from the start. stdio works for development, but production needs HTTP transport.
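The validation and timeout points above usually boil down to a few lines at the top of each tool. A stdlib-only sketch (the helper names and the clamp range of 1-20 are my own choices, not part of any SDK):

```python
from urllib.parse import urlparse

def validate_search_args(query: str, num_results: int) -> tuple[str, int]:
    """Normalize tool inputs before hitting an external API."""
    query = query.strip()
    if not query:
        raise ValueError("query must be a non-empty string")
    # Clamp to a sane range instead of crashing on out-of-range values.
    num_results = max(1, min(num_results, 20))
    return query, num_results

def validate_url(url: str) -> str:
    """Accept only absolute http(s) URLs."""
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https") or not parsed.netloc:
        raise ValueError(f"Not a valid absolute URL: {url!r}")
    return url
```

Raising `ValueError` with a specific message beats crashing: the error text flows back to the LLM, which can correct its input and retry.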
How Does MCP Compare to Function Calling and OpenAPI?
| Feature | MCP | Function Calling | OpenAPI |
|---|---|---|---|
| Protocol | JSON-RPC 2.0 | Provider-specific | REST/HTTP |
| Discovery | Built-in capability negotiation | Manual registration | API docs |
| Transport | stdio + SSE | HTTP (varies) | HTTP |
| Tool definition | Inline in code | Schema in config | External spec |
| Client compatibility | Any MCP client | Provider-specific | Any HTTP client |
| Resources | Built-in | Not native | Not native |
MCP's main advantage is the capability negotiation — clients automatically discover what tools and resources a server provides. With function calling, you need to register each tool's schema manually with every provider.
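Capability negotiation means a client can ask a server what it offers at runtime. A `tools/list` response looks roughly like the sketch below (field names follow MCP's convention of JSON Schema input descriptions; the tool itself is the earlier example):

```python
# Sketch of a tools/list response advertising one tool with its input schema.
tools_list_response = {
    "tools": [
        {
            "name": "search_web",
            "description": "Search the web for information.",
            "inputSchema": {
                "type": "object",
                "properties": {"query": {"type": "string"}},
                "required": ["query"],
            },
        }
    ]
}

# A client discovers tools by name and schema — no manual registration.
available = [t["name"] for t in tools_list_response["tools"]]
print(available)
```

With function calling, you would hand this same schema to each provider yourself; with MCP, the server publishes it once and every client picks it up automatically.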
What Are Resources vs. Tools in MCP?
- Tools are functions the LLM can call — like `search_web(query)`. They perform actions and return results.
- Resources are data the LLM can read — like files, database records, or API responses. They're read-only and don't modify state.
Use tools when the LLM needs to perform an action. Use resources when the LLM needs to access static or semi-static data. Most web search and scraping integrations are tools, since they require dynamic lookups.
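The distinction can be pictured as two registries: callables that act versus data that is only read. A toy model, not the SDK's actual internals:

```python
# Toy model: tools are callables, resources are read-only values.
tools = {
    "search_web": lambda query: f"results for {query}",  # performs an action
}
resources = {
    "config://settings": '{"theme": "dark"}',            # static, read-only data
}

def call_tool(name: str, **kwargs) -> str:
    # Tools execute logic and may have side effects.
    return tools[name](**kwargs)

def read_resource(uri: str) -> str:
    # Resources are looked up by URI and never modify state.
    return resources[uri]
```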
Summary
MCP server development is straightforward once you understand the protocol's patterns. Start with a focused tool set, write clear descriptions, handle errors gracefully, and deploy with SSE transport. Wrapping search and scraping APIs like SearchHive's gives your AI agents real-time web access with minimal code.
For more on building with search APIs in AI applications, check out /blog/complete-guide-to-news-monitoring-automation and /blog/how-to-data-extraction-pipeline-step-by-step.
Start Building with SearchHive
SearchHive gives you search, scraping, and deep research APIs that plug directly into your MCP servers. Get 500 free credits to start — no credit card required.
- SwiftSearch — Real-time web search results, perfect for MCP tool integrations
- ScrapeForge — Extract clean content from any URL with JavaScript rendering
- DeepDive — Multi-step research workflows for complex queries
Get your free API key and start building in minutes. Full API documentation available.