Python developers building search-powered applications — AI agents, RAG pipelines, price monitors, research tools, chatbots — have more search API options than ever. The challenge isn't finding a search API; it's finding the right one for your specific use case.
Some APIs return ranked blue links. Some return AI-synthesized answers. Some scrape full page content inline. Some run their own index. Some just proxy Google results.
This guide cuts through the noise. For each API, you'll find the pricing, the Python SDK, a working code example, and a clear recommendation for when it's the right choice.
Key Takeaways
- SearchHive SwiftSearch combines search + content extraction + deep research in one API — best for full pipeline projects
- Google's Custom Search JSON API is being deprecated (existing customers supported until Jan 2027) — avoid for new projects
- Serper.dev offers the cheapest Google SERP data at $1/1K searches at scale
- Tavily is purpose-built for AI/LLM applications and returns structured answers
- Brave Search API runs an independent index — useful for search result diversity
- Bing Web Search API at $3/1K is the cheapest major-engine option
- Exa AI uses neural search for semantic matching — fundamentally different from keyword search
1. SearchHive SwiftSearch
Best for: Developers who need search + scraping + research from one platform
SwiftSearch is part of SearchHive's three-product suite alongside ScrapeForge (scraping) and DeepDive (research). Credits are shared across all three — one API key covers everything.
| Tier | Price | Credits | Cost per 1K |
|---|---|---|---|
| Free | $0 | 500 | $0 |
| Starter | $9/mo | 5,000 | $1.80 |
| Builder | $49/mo | 100,000 | $0.49 |
| Unicorn | $199/mo | 500,000 | $0.40 |
```python
from searchhive import SwiftSearch, ScrapeForge

search = SwiftSearch(api_key="sh_live_...")

# Basic web search
results = search.search("python machine learning tutorials")
for r in results["organic"]:
    print(f"{r['title']} — {r['url']}")

# Search with inline content extraction
results = search.search(
    "best python web frameworks 2026",
    extract_content=True,
    num_results=5
)
for r in results["organic"]:
    if r.get("content"):
        # Full page text included in the response
        print(f"Content length: {len(r['content'])} chars")

# Combine with scraping for a full pipeline
scrape = ScrapeForge(api_key="sh_live_...")
for r in results["organic"]:
    page = scrape.scrape(r["url"], format="markdown")
    # Now you have search results + full page content
    print(page["markdown"][:200])
```
The extract_content=True parameter is the differentiator — it fetches and returns the actual page text for each search result in one API call. For RAG applications, this eliminates the need for a separate scraping step, cutting pipeline latency roughly in half.
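For the RAG case, here is a minimal sketch of the downstream step, assuming the response fields shown above. The chunk_text helper and the sample data are illustrative, not part of the SearchHive SDK:

```python
# Sketch: turn extracted page text into overlapping chunks ready for a
# RAG index. The response dict is a stand-in with the fields used above.

def chunk_text(text, size=500, overlap=50):
    """Split text into overlapping character chunks."""
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + size])
        start += size - overlap
    return chunks

# Stand-in for a SwiftSearch response with extract_content=True
results = {"organic": [{"url": "https://example.com", "content": "x" * 1200}]}

documents = []
for r in results["organic"]:
    if r.get("content"):
        for i, chunk in enumerate(chunk_text(r["content"])):
            documents.append({"url": r["url"], "chunk_id": i, "text": chunk})

print(len(documents))  # number of chunks ready for embedding
```

Each `documents` entry keeps the source URL, so retrieved chunks can be cited back to the page they came from.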
When to choose SwiftSearch: You're building an AI application that needs both search and content extraction. You want one API key, one SDK, and one invoice.
2. Serper.dev
Best for: Google SERP data at the lowest per-search cost
Serper.dev provides Google search results via a clean REST API. Pay-as-you-go with volume discounts.
| Tier | Price | Searches | Cost per 1K |
|---|---|---|---|
| Free | $0 | 2,500 | $0 |
| Tier 1 | $50 | 50,000 | $1.00 |
| Tier 2 | $375 | 500,000 | $0.75 |
| Tier 3 | $1,250 | 2,500,000 | $0.50 |
Credits valid for 6 months. No official Python SDK — use requests directly or a community wrapper.
```python
import requests

API_KEY = "your_serper_key"

def google_search(query, num=10):
    response = requests.post(
        "https://google.serper.dev/search",
        headers={
            "X-API-KEY": API_KEY,
            "Content-Type": "application/json"
        },
        json={"q": query, "num": num}
    )
    return response.json()

results = google_search("python web scraping libraries")
for r in results.get("organic", []):
    print(f"[{r.get('position')}] {r['title']}")
    print(f"   {r['link']}")
    if r.get("snippet"):
        print(f"   {r['snippet'][:150]}")

# Knowledge graph results
kg = results.get("knowledgeGraph", {})
if kg:
    print(f"Knowledge: {kg.get('title', 'N/A')}")
```
The response includes organic results, knowledge graph, people-also-ask questions, related searches, and answer box data.
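As a sketch of working with those extra blocks, the helper below pulls the non-organic SERP features out of a Serper-style response. The sample data is made up; the field names follow the sections listed above:

```python
# Sketch: collect the SERP feature blocks from a Serper-style response.
# The sample dict is a stand-in shaped like the sections described above.

sample = {
    "organic": [{"position": 1, "title": "Scrapy", "link": "https://scrapy.org"}],
    "peopleAlsoAsk": [{"question": "Is Scrapy free?", "snippet": "Yes, open source."}],
    "answerBox": {"answer": "Scrapy, BeautifulSoup, and Selenium"},
    "relatedSearches": [{"query": "scrapy vs beautifulsoup"}],
}

def extract_features(resp):
    """Flatten the non-organic SERP features into one dict."""
    return {
        "questions": [q["question"] for q in resp.get("peopleAlsoAsk", [])],
        "answer": resp.get("answerBox", {}).get("answer"),
        "related": [s["query"] for s in resp.get("relatedSearches", [])],
    }

features = extract_features(sample)
print(features["answer"])
```

Using `.get()` with defaults matters here: not every query returns every feature block, and a missing key should degrade to an empty list rather than raise.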
When to choose Serper.dev: You need Google-specific SERP data (rankings, features, knowledge panels) at the lowest cost per search. You don't need content extraction or multi-engine support.
3. Tavily
Best for: AI agents and RAG pipelines that need structured search results
Tavily was designed from the ground up for LLM-powered applications. Every feature optimizes for AI consumption rather than human browsing.
| Tier | Price | API Calls | Cost per 1K |
|---|---|---|---|
| Free | $0 | 1,000 | $0 |
| Pro | $40/mo | 5,000 | $8.00 |
| Plus | $120/mo | 25,000 | $4.80 |
| Business | $600/mo | 100,000 | $6.00 |
Pay-as-you-go at $0.008/credit beyond your plan.
```python
from tavily import TavilyClient

client = TavilyClient(api_key="tvly-...")

# AI-optimized search
results = client.search(
    query="what are the best python web frameworks",
    search_depth="advanced",
    include_answer=True,
    include_raw_content=True,
    max_results=5
)

# Direct answer — no need to pipe through an LLM
print("ANSWER:", results["answer"])

# Results with relevance scores
for r in results["results"]:
    print(f"[Score: {r['score']}] {r['title']}")
    print(f"   URL: {r['url']}")
    print(f"   Content: {r['content'][:200]}...")
```
The include_answer=True parameter returns a synthesized answer string — Tavily handles the extraction and summarization. This saves you an LLM call per search, which adds up fast in agent workflows.
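In an agent loop, the answer and the scored results typically get folded into a single observation string. A minimal sketch, assuming the response fields used above (the dict here is a stand-in, and the score threshold is an illustrative choice):

```python
# Sketch: fold a Tavily-style response into one observation string an agent
# can consume. Low-scoring results are dropped to keep the context tight.

def to_observation(resp, min_score=0.5, max_chars=300):
    """Prepend the synthesized answer, then append high-scoring sources."""
    lines = [f"Answer: {resp['answer']}"]
    for r in resp["results"]:
        if r["score"] >= min_score:
            lines.append(f"- {r['title']} ({r['url']}): {r['content'][:max_chars]}")
    return "\n".join(lines)

resp = {
    "answer": "FastAPI and Django lead in 2026.",
    "results": [
        {"score": 0.92, "title": "FastAPI", "url": "https://fastapi.tiangolo.com",
         "content": "FastAPI is a modern, fast web framework."},
        {"score": 0.31, "title": "Old post", "url": "https://example.com",
         "content": "Outdated comparison."},
    ],
}
print(to_observation(resp))
```

Filtering on the relevance score before building the prompt keeps token usage down and avoids feeding the model weakly related sources.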
search_depth="advanced" fetches and processes the actual page content, providing more accurate answers at the cost of slightly higher latency (~3-5 seconds vs ~1 second for basic search).
When to choose Tavily: You're building an AI agent, chatbot, or RAG system and want search results pre-formatted for LLM consumption. The include_answer feature alone justifies the higher per-search cost.
4. Brave Search API
Best for: Independent search results with AI summarization
Brave Search runs its own web index — not a Google or Bing reskin. This independence matters when you need result diversity or want to avoid Google's algorithmic biases.
| Tier | Price | Queries | Cost per 1K |
|---|---|---|---|
| Free | $0 | 2,000/month | $0 |
| Base | $3 | 1,000 | $3.00 |
| Data for AI | $5 | 1,000 | $5.00 |
```python
import requests

API_KEY = "your_brave_key"

def brave_search(query, count=10):
    response = requests.get(
        "https://api.search.brave.com/res/v1/web/search",
        headers={"X-Subscription-Token": API_KEY, "Accept": "application/json"},
        params={"q": query, "count": count}
    )
    return response.json()

results = brave_search("python async web scraping")
for r in results.get("web", {}).get("results", []):
    print(f"{r['title']}")
    print(f"   {r['url']}")
    print(f"   {r.get('description', '')[:150]}")
```
The summarizer endpoint (Data for AI tier) returns AI-generated summaries of search results:
```python
response = requests.get(
    "https://api.search.brave.com/res/v1/summarizer/search",
    headers={"X-Subscription-Token": API_KEY, "Accept": "application/json"},
    params={"q": "python web scraping best practices"}
)
summary = response.json()
print(summary.get("summary", "No summary available"))
```
When to choose Brave Search: You want results from an independent index, not Google. You're building a tool where search diversity matters. You like the $3/1K price point with a generous free tier.
5. Bing Web Search API
Best for: Cheapest major-engine search at scale
Microsoft's Bing Search API via Azure provides reliable search results at competitive pricing. The S1 tier at $3/1K is one of the cheapest paid options available.
| Tier | Price per 1K | Free Quota |
|---|---|---|
| F1 (Free) | $0 | 1,000/month |
| S1 | $3.00 | — |
| S2 | $6.00 | — |
| S3 | $12.00 | — |
```python
import requests

API_KEY = "your_bing_key"
ENDPOINT = "https://api.bing.microsoft.com/v7.0/search"

def bing_search(query, count=10):
    headers = {"Ocp-Apim-Subscription-Key": API_KEY}
    params = {"q": query, "count": count, "mkt": "en-US"}
    response = requests.get(ENDPOINT, headers=headers, params=params)
    return response.json()

results = bing_search("python web scraping tutorial")
for r in results.get("webPages", {}).get("value", []):
    print(f"{r['name']}: {r['url']}")
    print(f"   {r['snippet'][:150]}")
```
Bing's response includes web pages, news, images, videos, related searches, and entity information. The answer fields provide structured data for entities, computations, and time-sensitive queries.
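To collect more than one page of results, the API's count and offset parameters can be combined. A sketch of that pagination loop, with a faked backend standing in for the bing_search helper so the logic itself is runnable:

```python
# Sketch: page through results with count/offset. fake_bing_search stands in
# for the real helper above and pretends the index holds 25 matches.

def fake_bing_search(query, count=10, offset=0):
    total = 25
    names = [f"Result {i}" for i in range(offset, min(offset + count, total))]
    return {"webPages": {"totalEstimatedMatches": total,
                         "value": [{"name": n} for n in names]}}

def all_results(query, page_size=10, max_pages=5):
    """Fetch pages until a page comes back empty or max_pages is hit."""
    collected, offset = [], 0
    for _ in range(max_pages):
        resp = fake_bing_search(query, count=page_size, offset=offset)
        batch = resp.get("webPages", {}).get("value", [])
        if not batch:
            break
        collected.extend(batch)
        offset += page_size
    return collected

print(len(all_results("python web scraping tutorial")))
```

The max_pages cap is a safety valve: totalEstimatedMatches is an estimate, so the loop should stop on an empty page rather than trust the reported total.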
When to choose Bing: You need a major search engine at the lowest possible cost. You already have an Azure account. You don't specifically need Google results.
6. Exa AI
Best for: Semantic/neural search rather than keyword matching
Exa (formerly Metaphor) uses embeddings-based neural search instead of traditional keyword matching. You describe what you're looking for in natural language.
| Product | Price per 1K |
|---|---|
| Search | $7.00 |
| Contents | $1.00 |
| Answer | $5.00 |
| Deep Research | $12.00 |
```python
import requests

API_KEY = "your_exa_key"

def neural_search(query, num_results=10):
    response = requests.post(
        "https://api.exa.ai/search",
        headers={"x-api-key": API_KEY, "Content-Type": "application/json"},
        json={
            "query": query,
            "numResults": num_results,
            "type": "auto",
            "contents": {"text": {"maxCharacters": 1000}}
        }
    )
    return response.json()

# Natural language query — no keywords needed
results = neural_search(
    "comprehensive guides about building scalable web scrapers with Python",
    num_results=5
)
for r in results.get("results", []):
    print(f"{r['title']}: {r['url']}")
    print(f"   Score: {r.get('score', 'N/A')}")
    if r.get("text"):
        print(f"   {r['text'][:200]}...")
```
The contents parameter fetches page text inline, similar to SearchHive's extract_content. At $1.00 per 1K pages on top of the $7.00/1K search price, the combined cost runs well above SearchHive's, but the neural matching is unique.
When to choose Exa: You need semantic search — finding pages that match the meaning of your query, not specific keywords. Useful for research, content discovery, and finding niche sources.
Comparison Table
| Service | Free Tier | Entry Price | Cost per 1K | Python SDK | Content Extraction | AI Answers |
|---|---|---|---|---|---|---|
| SearchHive | 500 credits | $9/mo | $1.80 | Official | Yes (inline) | No |
| Serper.dev | 2,500 searches | $50 | $1.00 | Community | No | No |
| Tavily | 1,000 calls | $40/mo | $8.00 | Official | Yes | Yes |
| Brave Search | 2,000/mo | $3/1K | $3.00 | Community | Summarizer | Yes |
| Bing API | 1,000/mo | $3/1K | $3.00 | Azure SDK | No | Entities |
| Exa AI | Limited | $7/1K | $7.00 | Official | Yes ($1/1K pages) | Yes ($5/1K) |
Decision Framework
Pick your primary use case:
RAG pipeline / AI knowledge base → SearchHive (search + extraction) or Tavily (AI-formatted results)
SERP monitoring / SEO tool → Serper.dev (cheapest Google SERP data)
Chatbot / Q&A system → Tavily (include_answer) or Brave (summarizer)
Research / content discovery → Exa AI (neural matching) or SearchHive (DeepDive)
General-purpose search on a budget → Serper.dev at $1/1K or Bing at $3/1K
Search result diversity → Brave Search (independent index)
Avoid: Google Custom Search JSON API for new projects (being deprecated). SerpApi unless you specifically need multi-engine support (40+ engines) and have the budget.
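To put rough numbers on the budget question, a quick sketch comparing base search costs at a given monthly volume. It uses the entry-tier per-1K prices from the comparison table above; volume discounts (e.g. Serper at $0.50/1K, SearchHive at $0.40/1K at scale) shift the picture at higher tiers, and extraction/answer add-ons are excluded:

```python
# Sketch: cheapest provider for a monthly search volume, using the entry-tier
# per-1K prices from the comparison table (base search only).

COST_PER_1K = {
    "Serper.dev": 1.00,
    "SearchHive": 1.80,
    "Brave Search": 3.00,
    "Bing API": 3.00,
    "Exa AI": 7.00,
    "Tavily": 8.00,
}

def monthly_cost(provider, searches):
    return COST_PER_1K[provider] * searches / 1000

def cheapest(searches):
    return min(COST_PER_1K, key=lambda p: monthly_cost(p, searches))

for volume in (10_000, 100_000):
    best = cheapest(volume)
    print(f"{volume:>7} searches/mo -> {best} at ${monthly_cost(best, volume):.2f}")
```

Raw cost is only half the decision, of course: if Tavily's synthesized answer replaces an LLM call per search, the effective cost gap narrows considerably.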
Recommendation
For Python developers starting a new search-powered project in 2026, SearchHive SwiftSearch is the strongest all-around choice. The free tier (500 credits) is enough to prototype, the search + extraction combo eliminates a whole integration, and pricing at $9/month is lower than any competitor's paid tier.
If you specifically need Google SERP features (rankings, knowledge panels, answer boxes), Serper.dev at $1/1K searches is unbeatable. For AI agent applications, Tavily is worth the premium for the include_answer feature.
Start with the free tier of whichever matches your use case. Track your actual consumption for a week. Then commit to a paid plan.
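Tracking consumption can be as simple as counting calls. A minimal sketch using a decorator around whatever search function you settle on (the search stub here is a stand-in for any provider's client):

```python
# Sketch: count calls per function so weekly usage can be tallied before
# committing to a paid plan. Works on any provider's search wrapper.
import functools

ledger = {}

def track_usage(fn):
    """Increment a per-function call counter on every invocation."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        ledger[fn.__name__] = ledger.get(fn.__name__, 0) + 1
        return fn(*args, **kwargs)
    return wrapper

@track_usage
def search(query):
    # Stand-in for a real API call
    return {"query": query, "organic": []}

for q in ("scrapy", "httpx", "playwright"):
    search(q)

print(ledger)  # {'search': 3}
```

Persisting the ledger to a file or a metrics backend turns a week of prototyping into a defensible volume estimate for picking a tier.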
- Full search API pricing comparison
- Best search APIs for RAG pipelines
- Get started with SearchHive — free, no credit card