How to Use a Search API for Developers — Step-by-Step
Search APIs let you programmatically retrieve search engine results, turning Google, Bing, and other engines into data sources for your applications. Whether you're building an AI agent, a price comparison tool, or a content monitoring system, a reliable search API is a fundamental building block.
This tutorial walks through setting up and using a search API from scratch, using SearchHive's SwiftSearch as the primary example.
Prerequisites
- Python 3.9+ installed
- A SearchHive account (free -- 500 credits included)
- pip available (we'll install the `searchhive` SDK in Step 3)
Step 1: Understand What a Search API Does
A search API sends a query to a search engine on your behalf and returns structured results. Instead of scraping Google's HTML (which is fragile and often blocked), you make a clean REST API call and get back structured JSON.
Typical response fields include:
- Title and URL of each result
- Snippet/description from the search engine
- Position/ranking of the result
- Optional: thumbnail URLs, cached pages, related searches
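The exact field names vary by provider, so treat the following as an illustration only: a hypothetical response payload (not SwiftSearch's documented schema) and how you might pull ranked URLs out of it.

```python
# A hypothetical search-result payload, for illustration only --
# consult your provider's docs for the real schema.
sample_response = {
    "query": "python web scraping",
    "results": [
        {
            "position": 1,
            "title": "Web Scraping with Python - Real Python",
            "url": "https://realpython.com/python-web-scraping/",
            "snippet": "Learn how to scrape websites with Python...",
        },
        {
            "position": 2,
            "title": "Beautiful Soup Documentation",
            "url": "https://www.crummy.com/software/BeautifulSoup/",
            "snippet": "Beautiful Soup is a library for pulling data...",
        },
    ],
}

# Pull out just the URLs, in ranked order
urls = [
    r["url"]
    for r in sorted(sample_response["results"], key=lambda r: r["position"])
]
print(urls)
```

Whatever the provider calls these fields, the shape is the same: a list of result objects, each with a rank, a title, a URL, and a snippet.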
Step 2: Get Your API Key
Sign up at searchhive.dev and generate an API key from the dashboard. The free tier includes 500 credits, enough to follow this entire tutorial and build a small project.
Always store API keys in environment variables, never in code:
```shell
# Linux/macOS
export SEARCHHIVE_API_KEY="sh_live_your_key_here"

# Windows
set SEARCHHIVE_API_KEY=sh_live_your_key_here
```
Step 3: Make Your First Search Request
Install the SDK and make a basic search call:
```shell
pip install searchhive
```
```python
import os
from searchhive import SwiftSearch

# Initialize the client
client = SwiftSearch(api_key=os.environ["SEARCHHIVE_API_KEY"])

# Perform a basic Google search
results = client.search(
    query="Python web scraping tutorial",
    engine="google",
    num=10,
)

# Print the results
for i, result in enumerate(results, 1):
    print(f"{i}. {result.title}")
    print(f"   {result.url}")
    print(f"   {result.snippet}")
    print()
```
Each call to client.search() returns structured result objects with typed fields -- no manual JSON parsing needed.
Step 4: Add Parameters for Precision
Search APIs support various parameters to narrow results:
```python
results = client.search(
    query="best project management tools",
    engine="google",
    num=20,            # Number of results (1-100)
    country="us",      # Country for localized results
    language="en",     # Language preference
    # recency="week",  # Only results from the past week
    # start=11,        # Pagination (second page)
)
```
Common use cases for parameter combinations:
- Local business search: `country="gb"`, `query="coffee shops London"`
- Recent news: `recency="day"`, `query="AI regulation"`
- Multi-language SEO: loop through `language` values for the same query
- Deep pagination: use the `start` parameter to get beyond the first 10 results
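Deep pagination is easy to get off by one. Assuming the API follows the common convention where `start` is the 1-based index of the first result (so the second page of 10-per-page results begins at `start=11`), the offsets can be generated like this:

```python
def page_offsets(pages, per_page=10):
    """Yield 1-based `start` offsets for successive result pages."""
    for page in range(pages):
        yield page * per_page + 1

# First three pages of 10 results each
offsets = list(page_offsets(3))
print(offsets)  # [1, 11, 21]
```

You would pass each offset as `start=` in a loop, accumulating results as you go; check your provider's docs in case it uses 0-based offsets instead.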
Step 5: Handle Errors and Rate Limits
Production code needs proper error handling. Search APIs can return errors for rate limiting, invalid queries, or temporary issues:
```python
import time

def search_with_retry(query, max_retries=3):
    """Search with exponential backoff on rate limits."""
    for attempt in range(max_retries):
        try:
            results = client.search(query=query, engine="google", num=10)
            return results
        except Exception as e:
            error_str = str(e)
            if "429" in error_str and attempt < max_retries - 1:
                # Rate limited -- wait and retry
                wait_time = 2 ** attempt  # 1s, 2s, 4s
                print(f"Rate limited, waiting {wait_time}s...")
                time.sleep(wait_time)
            elif "401" in error_str:
                print("Authentication failed -- check your API key")
                return []
            elif "402" in error_str:
                print("Credits exhausted -- upgrade your plan")
                return []
            else:
                print(f"Error: {e}")
                if attempt == max_retries - 1:
                    return []
                time.sleep(1)
    return []
```
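Retry logic like this is easiest to test when the network call is injectable. A generic variant (a sketch -- `with_retry` is not part of the SDK) takes any callable, which also makes it reusable for scrapers or other API clients, and lets you verify the backoff behavior with a stub instead of a live endpoint:

```python
import time

def with_retry(fn, max_retries=3, base_delay=1.0):
    """Call fn(), retrying with exponential backoff on 429-style errors."""
    for attempt in range(max_retries):
        try:
            return fn()
        except Exception as e:
            # Retry only on rate limits, and only while attempts remain
            if "429" in str(e) and attempt < max_retries - 1:
                time.sleep(base_delay * (2 ** attempt))
            else:
                raise

# Simulate a client that is rate limited twice, then succeeds
calls = {"n": 0}

def flaky_search():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("HTTP 429: Too Many Requests")
    return ["result"]

print(with_retry(flaky_search, base_delay=0))  # ['result'] after 3 attempts
```

Passing `base_delay=0` in tests keeps them fast; in production you would leave the default so the waits are real.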
Step 6: Build a Practical Application
Let's build a simple competitor mention tracker -- a common use case for search APIs:
```python
import json
from datetime import datetime

def track_competitor_mentions(competitors, queries_per_competitor=3):
    """Search for mentions of each competitor and return results."""
    all_results = {}
    for competitor in competitors:
        queries = [
            f"{competitor} features",
            f"{competitor} pricing",
            f"{competitor} vs",
            f"{competitor} review",
            f"{competitor} news",
        ][:queries_per_competitor]
        competitor_results = []
        for query in queries:
            results = search_with_retry(query)
            for r in results:
                competitor_results.append({
                    "title": r.title,
                    "url": r.url,
                    "snippet": r.snippet,
                    "query": query,
                })
        all_results[competitor] = competitor_results
        print(f"Found {len(competitor_results)} mentions for {competitor}")
    return all_results

# Run the tracker
competitors = ["SearchHive", "SerpAPI", "Firecrawl"]
report = track_competitor_mentions(competitors)

# Save the report
filename = f"competitor-report-{datetime.now().strftime('%Y-%m-%d')}.json"
with open(filename, "w") as f:
    json.dump(report, f, indent=2)
print(f"Report saved to {filename}")
```
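Since the same page often ranks for several of these queries, the raw report can contain duplicates. A small post-processing helper (hypothetical, but matching the dict keys used in the tracker) keeps only the first occurrence of each URL:

```python
def dedupe_by_url(mentions):
    """Keep the first mention per URL, preserving order."""
    seen = set()
    unique = []
    for m in mentions:
        if m["url"] not in seen:
            seen.add(m["url"])
            unique.append(m)
    return unique

mentions = [
    {"url": "https://example.com/a", "query": "SearchHive review"},
    {"url": "https://example.com/b", "query": "SearchHive pricing"},
    {"url": "https://example.com/a", "query": "SearchHive features"},
]
print(len(dedupe_by_url(mentions)))  # 2
```

Run it over each competitor's list before writing the JSON report.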
Step 7: Integrate with Your Application
For web applications, wrap the search client in a service layer:
```python
# search_service.py
import os
from functools import lru_cache
from typing import Optional

from searchhive import SwiftSearch

class SearchService:
    def __init__(self):
        self.client = SwiftSearch(api_key=os.environ["SEARCHHIVE_API_KEY"])

    @lru_cache(maxsize=256)
    def cached_search(self, query, engine="google", num=10):
        """Cache results to avoid duplicate API calls."""
        return self.client.search(query=query, engine=engine, num=num)

    def search_and_rank(self, query, domain_filter=None, num=20):
        """Search and optionally filter by domain."""
        results = self.cached_search(query, num=num)
        if domain_filter:
            results = [r for r in results if domain_filter in r.url]
        return results

# Usage in a FastAPI endpoint
from fastapi import FastAPI

app = FastAPI()
service = SearchService()

@app.get("/api/search")
def search_endpoint(q: str, domain: Optional[str] = None):
    results = service.search_and_rank(q, domain_filter=domain)
    return {
        "query": q,
        "results": [{"title": r.title, "url": r.url} for r in results],
    }
```
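One caveat with `lru_cache`: entries never expire, so a popular query could serve stale results indefinitely. If freshness matters, a time-bounded cache is a better fit. A minimal sketch (not part of the SDK; the injectable clock exists purely to make expiry testable):

```python
import time

class TTLCache:
    """Tiny time-bounded cache; entries expire after ttl seconds."""

    def __init__(self, ttl=300.0, clock=time.monotonic):
        self.ttl = ttl
        self.clock = clock  # injectable for deterministic tests
        self._store = {}

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, stored_at = entry
        if self.clock() - stored_at > self.ttl:
            del self._store[key]  # drop the stale entry
            return None
        return value

    def set(self, key, value):
        self._store[key] = (value, self.clock())

# Demo with a fake clock so expiry is deterministic
now = [0.0]
cache = TTLCache(ttl=300, clock=lambda: now[0])
cache.set("python tutorial", ["result1", "result2"])
print(cache.get("python tutorial"))  # fresh: ['result1', 'result2']
now[0] = 301.0
print(cache.get("python tutorial"))  # expired: None
```

Wrapping `client.search()` calls in `cache.get`/`cache.set` gives you the same duplicate-call savings as `lru_cache`, with a bounded staleness window.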
Step 8: Monitor Your Usage
Track API consumption to avoid surprise charges:
```python
def batch_search_with_budget(queries, daily_budget=1000):
    """Search multiple queries while respecting a daily budget."""
    results = []
    credits_used = 0
    for query in queries:
        if credits_used >= daily_budget:
            print(f"Budget reached ({daily_budget} credits). Stopping.")
            break
        search_results = search_with_retry(query)
        results.extend(search_results)
        # Assumes one credit per result; adjust if your plan bills per request
        credits_used += len(search_results)
    print(f"Used {credits_used}/{daily_budget} credits")
    return results
```
SearchHive's dashboard also shows real-time usage per API key, so you can monitor consumption without writing custom tracking code.
Common Issues and Fixes
| Issue | Cause | Fix |
|---|---|---|
| 401 Unauthorized | Invalid or expired API key | Regenerate the key in the dashboard |
| 429 Too Many Requests | Rate limit exceeded | Add exponential backoff, reduce concurrency |
| 402 Payment Required | Credits exhausted | Upgrade your plan or wait for the monthly reset |
| Empty results | Query too specific or blocked | Broaden the query, try a different engine |
| Slow responses | Complex queries or high load | Use smaller `num` values, add timeouts |
Next Steps
With the basics covered, you can extend your search integration:
- Combine search with scraping: Use SearchHive's ScrapeForge to get full page content from search results
- Build an AI agent: Feed search results into an LLM for summarization or question answering
- Automate monitoring: Schedule regular searches with cron and alert on new mentions
- Multi-engine comparison: Query both Google and Bing and compare rankings
Get Started with SearchHive
SearchHive's free tier includes 500 credits -- enough to prototype and test your integration before committing to a paid plan. The $9/month Starter plan provides 5K credits for light production use.
Sign up for free and check the API documentation for complete reference material and more code examples.
Related: /blog/best-search-api-for-developers | /blog/searchhive-vs-serper | /tutorials/build-ai-search-agent